January 7, 2015
New paper out in eLife
Long discussed as a way to explain processes in the visual system, the “efficient coding hypothesis” can also predict how complex structures are processed in the central brain, IST Austria Professor Gasper Tkacik now shows. Tkacik and his colleagues presented their findings in a recent eLife paper titled “Variance predicts salience in central sensory processing” (DOI: http://dx.doi.org/10.7554/eLife.03722). eLife is a new open-access journal that reports findings of general significance in the life sciences.
Our visual system — from our eyes to the cortical areas of the brain dedicated to image processing — effortlessly handles large amounts of data to extract information of relevance to our everyday lives: what are the interesting objects around us, what is moving where in our field of view, who are the people around us. The current understanding is that this processing happens in multiple layers, where raw light intensity signals, picked up by the retina, are successively transformed into more and more high-level representations in the cortex. In each of these processing layers, one can isolate and record individual neurons that act as feature detectors, responding strongly to the particular image feature that layer is extracting. This hierarchical decomposition of the visual scene provides a neural substrate, or representation, that other parts of the brain may use to make behaviorally relevant decisions: which object to pick up, what to eat, whom to approach, when to flee.
The central idea behind such sensory processing is that a good data-processing system will use its limited resources — be it the bandwidth of the optical cable linking the US and Europe in a man-made communications network, or the finite number of neurons in the brain — in such a way as to maximize the information flow from the relevant external stimuli to the relevant internal representation (the spikes that encode those images in the brain).
The application of information theory to such early visual processing in the retina and primary visual cortex is known as the “efficient coding hypothesis”. But the power of this hypothesis has so far been limited to (i) predictions in the sensory periphery, mostly the retina; and (ii) predictions derived from the low-order statistical structure of natural scenes, basically, from the way in which, on average, light intensity in natural images co-varies between pairs of pixels at a particular separation.
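To make the notion of "low-order statistical structure" concrete, the following sketch (not from the paper; the image here is synthetic, generated by smoothing random noise so that nearby pixels are correlated, as they are in natural scenes) computes the covariance of light intensity between pairs of pixels at a fixed horizontal separation — the kind of pairwise statistic that earlier efficient-coding predictions were built on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a natural image: box-smooth white noise
# so that nearby pixels co-vary, mimicking natural-scene statistics.
noise = rng.standard_normal((256, 256))
image = np.zeros_like(noise)
for dy in range(-2, 3):
    for dx in range(-2, 3):
        image += np.roll(np.roll(noise, dy, axis=0), dx, axis=1) / 25.0

def pair_covariance(img, separation):
    """Covariance of pixel intensities at a given horizontal offset."""
    a = img[:, :-separation].ravel()
    b = img[:, separation:].ravel()
    return np.cov(a, b)[0, 1]

# Covariance falls off as the separation between pixels grows.
for d in (1, 4, 16):
    print(d, pair_covariance(image, d))
```

In natural images this covariance decays smoothly with separation; measuring that decay across an ensemble of images is what "low-order statistics of natural scenes" amounts to in practice.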
In their recent eLife publication, Tkacik and his colleagues used the efficient coding hypothesis to predict how the human visual system should optimally respond to higher-order statistical structure in visual scenes, and confirmed this prediction with human psychophysical measurements.
These results are noteworthy because they extend the reach of the powerful efficient coding hypothesis beyond low-order correlations in natural scenes, and beyond the retina into the central brain. Several convergent lines of evidence indicate that the processing of these complex, texture-like correlations of light is cortical, and implicate the secondary visual cortex in their detection. The researchers hope that the results “will further motivate the search for predictive principles in neurobiology that go beyond merely describing the phenomenology, to mathematically deriving why such phenomenology has evolved.” (Tkacik)