
Friday, 25 May 2018

Model-free and model-based reward prediction errors in EEG


Publication date: September 2018
Source: NeuroImage, Volume 178
Author(s): Thomas D. Sambrook, Ben Hardwick, Andy J. Wills, Jeremy Goslin
Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign each candidate action an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has previously been assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain.
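The model-free mechanism the abstract describes — a cached value nudged toward observed outcomes by a reward prediction error — can be sketched as a standard temporal-difference update. This is a minimal illustration of the general mechanism only, not the paper's experimental task or analysis; the function name and parameters are illustrative assumptions.

```python
# Minimal sketch of a model-free reward prediction error (RPE) update,
# as in temporal-difference learning. Illustrative only: the names and
# learning rate here are assumptions, not taken from the paper.

def model_free_update(value, reward, alpha=0.1):
    """Update a cached action value using the reward prediction error."""
    rpe = reward - value          # expectancy violation: delta = r - V
    return value + alpha * rpe    # value moves a step toward the outcome

# Example: an action initially valued at 0.0, repeatedly rewarded with 1.0
v = 0.0
for _ in range(10):
    v = model_free_update(v, reward=1.0)
# v converges toward 1.0 as successive prediction errors shrink
```

Model-based learning, by contrast, would derive expected values from a learned model of the task's states and contingencies rather than from this cached-value update; the paper's point is that the brain appears to compute prediction errors against both kinds of value estimate.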



https://ift.tt/2KVj2U9
