If you think self-driving cars can’t get here soon enough, you’re not alone. But programming computers to recognize objects is very technically challenging, especially since scientists don’t fully understand how our own brains do it.
Now, Salk Institute researchers have analyzed how neurons in a critical part of the brain, called V2, respond to natural scenes, providing a better understanding of vision processing. The work is described in Nature Communications on June 8, 2017.
“Understanding how the brain recognizes visual objects is important not only for the sake of vision, but also because it provides a window on how the brain works in general,” says Tatyana Sharpee, an associate professor in Salk’s Computational Neurobiology Laboratory and senior author of the paper. “Much of our brain is composed of a repeated computational unit, called a cortical column. In vision especially we can control inputs to the brain with exquisite precision, which makes it possible to quantitatively analyze how signals are transformed in the brain.”
Although we often take the ability to see for granted, this ability derives from sets of complex mathematical transformations that we are not yet able to reproduce in a computer, according to Sharpee. In fact, more than a third of our brain is devoted exclusively to the task of parsing visual scenes.
Our visual perception starts in the eye with light and dark pixels. These signals are sent to the back of the brain, to an area called V1, where they are transformed to correspond to edges in the visual scene. Somehow, as a result of several subsequent transformations of this information, we can then recognize faces, cars and other objects, and whether they are moving. How precisely this recognition happens is still a mystery, in part because the neurons that encode objects respond in complicated ways.
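The first step in that pipeline, the transformation from raw pixel intensities to edge responses in V1, is often approximated computationally as convolution with oriented filters. The sketch below is an illustrative analogy, not the authors' model: it convolves a toy grayscale "scene" with a standard Sobel kernel to show how a simple local operation turns pixel values into an edge map.

```python
import numpy as np

# A vertical-edge-detecting Sobel kernel: responds strongly where
# brightness changes from left to right, loosely analogous to an
# orientation-selective V1 neuron's receptive field.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def detect_edges(image: np.ndarray, kernel: np.ndarray = SOBEL_X) -> np.ndarray:
    """Valid (no-padding) 2D convolution of a grayscale image with a kernel."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sum of elementwise products over the local patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy scene: dark left half, bright right half -> one vertical edge.
scene = np.zeros((5, 5))
scene[:, 3:] = 1.0
edges = detect_edges(scene)
# The edge map is zero in the uniform region and nonzero where
# brightness changes, i.e. at the boundary between the two halves.
```

Real models of V1 use banks of such filters at many orientations and scales; the harder, still-unsolved problem the article describes is characterizing the *subsequent* transformations, in V2 and beyond, that build object representations from these edge responses.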
Illustration: How the brain’s V1 and V2 areas might use information about edges and textures to represent objects such as the teddy bear on the left. (Credit: Salk Institute)