The Role of Task States and Offline Sampling in Decision Making and Learning
Reinforcement learning (RL) theory has been an important framework for understanding learning and decision making, both computationally and neuroscientifically.
Against the background of foundational findings such as dopaminergic prediction errors, it has become a key challenge to explain how RL's appealingly simple updating algorithms can remain useful when the available sensory data are (a) noisy and overabundant, (b) ambiguous when considered in isolation, and (c) generated by environmental dynamics that are governed by transitions between many possible states.
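As a minimal illustration of the updating algorithms referred to here, the canonical temporal-difference prediction error and value update (a standard textbook formulation, not one specific to this talk) can be written as

$$\delta_t = r_t + \gamma\, V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\, \delta_t,$$

where $\delta_t$ is the prediction error associated with dopaminergic signalling, $\gamma$ a discount factor, and $\alpha$ a learning rate. The simplicity of this rule presupposes that the current state $s_t$ is well defined, which is precisely what noisy, ambiguous observations call into question.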
This challenge has raised questions about how rich experiences are mapped onto abstract states that are associated with values, and how transitions between states can be computed efficiently.
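One standard way of formalizing the first question, offered here only as an illustrative sketch rather than the formulation used in the talk, is Bayesian inference over hidden task states: the agent maintains a belief $b_t(s)$ over states and updates it from observations and a known transition model,

$$b_{t+1}(s') \propto p(o_{t+1} \mid s') \sum_{s} T(s' \mid s, a_t)\, b_t(s),$$

where $o_{t+1}$ is the current observation, $a_t$ the action taken, and $T$ the state transition model. Values can then be learned over these inferred states rather than over raw sensory input.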
In my talk, I will argue that investigating both issues is crucial for understanding the origins of well-known RL mechanisms elsewhere in the brain, and that state inference and replay might be related.
Moreover, I will provide evidence that the orbitofrontal cortex and the hippocampus are involved in state inference and sequential state-transition replay, respectively, and present methodological approaches for measuring these processes in humans with fMRI.