Dec. 09, 2010
Scientific News

Error Correction

Visual Circuits Do Error Correction on the Fly

The brain's visual neurons continually develop predictions of what they will perceive and then correct erroneous assumptions as they take in additional external information. Duke University researcher Tobias Egner proposes a new mechanism for visual cognition that challenges the currently held model of sight and could change the way neuroscientists study the brain. The new model, called predictive coding, is more complex than the standard account of sight and adds an extra dimension to it.

The prevailing model has been that neurons process incoming data from the retina through a series of hierarchical layers. In this bottom-up system, lower neurons first detect an object's simple features, such as horizontal or vertical lines. They send that information to the next level of brain cells, which identify other specific features and feed the emerging image to the next layer of neurons, which add further detail. The image travels up the neuron ladder until it is completely formed.

But new brain-imaging data provide "clear and direct evidence" that this standard picture of vision, called feature detection, is incomplete. The data, published Dec. 8 in the Journal of Neuroscience, show that the brain predicts what it will see and edits those predictions in a top-down mechanism, said Egner, an assistant professor of psychology and neuroscience. In this system, neurons at each level form context-sensitive predictions about what an image might be and send them to the next lower level, where they are compared with the incoming sensory data. Any mismatches, or prediction errors, between what the neurons expected to see and what they actually observe are sent back up the neuron ladder. Each layer then adjusts its interpretation of the image to eliminate the prediction error at the layer below. Once all prediction error is eliminated, the visual cortex has settled on its best-guess interpretation of the object, and the person actually sees it.

Egner and his colleagues wanted to capture this process almost as it happened. The team used functional magnetic resonance imaging (fMRI) to scan the fusiform face area (FFA), a brain region that specializes in recognizing faces.
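The predict-compare-revise loop described above can be sketched in a few lines of code. This is only an illustration of the general idea: the layer sizes, weights, step size, and stopping rule below are assumptions for the sketch, not the model used in the study.

```python
import numpy as np

# Minimal sketch of a predictive-coding loop (illustrative assumptions throughout).
# "Generative" weights map a higher-level hypothesis (face vs. house) onto four
# lower-level features (e.g. oriented edges). Columns: [face, house].
W = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])

sensory_input = W @ np.array([1.0, 0.0])  # features actually arriving from a "face"

r = np.zeros(2)       # higher layer's current guess about the object
step_size = 0.1

for _ in range(200):
    prediction = W @ r                    # top-down: predict the lower-level features
    error = sensory_input - prediction    # only the mismatch is sent back up
    r += step_size * (W.T @ error)        # revise the guess to cancel the error
    if np.linalg.norm(error) < 1e-3:      # prediction error (nearly) eliminated
        break

print("best-guess interpretation:", r)    # converges toward [1, 0], i.e. "face"
```

In contrast to a pure feature-detection hierarchy, the signal carried upward here is not the image itself but the residual error left after the top-down prediction has been subtracted.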

The researchers monitored the brains of 16 subjects as they viewed faces or houses framed in colored boxes, where the frame color signaled the likelihood that the picture would be a face or a house. Participants were told to press a button whenever they saw an inverted face or house, but the researchers were measuring something else: by varying the frame-color and stimulus combinations, they could tease apart the FFA's responses to the stimulus itself, to face expectation, and to error processing.

If the feature-detection model were correct, the FFA response should be stronger for faces than for houses, regardless of the subjects' expectations. Instead, Egner and his colleagues found that when subjects strongly expected to see a face, their FFA response was nearly the same whether they were actually shown a face or a house. The study then uses computational modeling to show that this pattern of activation can only be explained by a combined contribution from face expectation and prediction error.

www.dukenews.duke.edu
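A toy calculation shows why such a combination can make responses to expected faces and unexpected houses look alike. The response formula, the surprise term, and the weights below are assumptions chosen for illustration; they are not the authors' computational model.

```python
# Toy illustration (not the authors' model): FFA signal sketched as face expectation
# plus a face prediction-error ("surprise") term.

def ffa_response(p_face, face_shown, w_expect=1.0, w_error=1.0):
    """Modelled FFA signal = weighted expectation + weighted face surprise."""
    face_surprise = (1.0 - p_face) if face_shown else 0.0  # unexpected faces surprise most
    return w_expect * p_face + w_error * face_surprise

for p_face in (0.75, 0.25):                       # high vs. low expectation of a face
    for shown, label in ((True, "face"), (False, "house")):
        print(f"P(face)={p_face:.2f}, stimulus={label}: "
              f"modelled response = {ffa_response(p_face, shown):.2f}")
```

Under high face expectation the modelled responses to face and house stimuli come out close together, while under low expectation the face response dominates, which is the qualitative pattern the study reports.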
