Updating dopamine reward signals

https://doi.org/10.1016/j.conb.2012.11.012
Open access under a Creative Commons license

Recent work has advanced our knowledge of phasic dopamine reward prediction error signals. The error signal is bidirectional, closely reflects the higher-order prediction error described by temporal difference learning models, is compatible with both model-free and model-based reinforcement learning, reports subjective rather than physical reward value during temporal discounting, and reflects subjective stimulus perception rather than physical stimulus properties. Dopamine activations are driven primarily by reward, and to some extent by risk, whereas punishment and salience have only limited activating effects when appropriate controls are applied. The signal is homogeneous in its time course but heterogeneous in many other respects. It is essential for synaptic plasticity and for a range of behavioural learning situations.
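The bidirectional temporal difference (TD) prediction error referred to above can be illustrated with a minimal sketch. All parameters here (learning rate, discount factor, the toy cue-reward task) are illustrative choices, not values from the article; the sketch shows only the standard TD(0) error that dopamine responses are thought to resemble.

```python
# Minimal sketch of a temporal-difference (TD) reward prediction error.
# Parameters (alpha, gamma) and the toy task are illustrative, not from the article.

def td_error(reward, v_next, v_current, gamma=0.9):
    """Bidirectional prediction error: positive when outcomes are better
    than predicted, negative when worse."""
    return reward + gamma * v_next - v_current

def td_update(v, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) update of a tabular value function."""
    delta = td_error(reward, v.get(next_state, 0.0), v.get(state, 0.0), gamma)
    v[state] = v.get(state, 0.0) + alpha * delta
    return delta

# Toy example: a cue state followed by a rewarded terminal state.
v = {}
deltas = [td_update(v, "cue", "end", reward=1.0) for _ in range(50)]

# Early errors are large and positive; as the cue comes to predict the
# reward, the error at reward delivery shrinks toward zero.
print(round(deltas[0], 3), round(deltas[-1], 3))
```

With repeated pairings the error migrates away from the reward itself, mirroring the classic observation that dopamine responses transfer from reward to the predictive cue.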

Highlights

► Dopamine prediction errors are influenced by model-based information.
► Dopamine neurons show limited activations to punishers when proper controls are made.
► Dopamine neurons do not code salience to a substantial extent.
► Intact dopamine mechanisms are required for learning and postsynaptic plasticity.
