Headlines at Hopkins
News Release

Office of News and Information
Johns Hopkins University
901 South Bond Street, Suite 540
Baltimore, Maryland 21231
Phone: 443-287-9960 | Fax: 443-287-9920

November 19, 2007
FOR IMMEDIATE RELEASE
CONTACT: Lisa De Nike
(443) 287-9960
lde@jhu.edu


How Do We Make Sense of What We See?
Johns Hopkins researchers identify how brains
rationalize ambiguous visual data

M.C. Escher's ambiguous drawings transfix us: Are those black birds flying against a white sky or white birds soaring out of a black sky?

Lines in Escher's drawings can seem to be part of either of two different shapes. How does our brain decide which of those shapes to "see"? In a situation where the visual information provided is ambiguous — whether we are looking at Escher's art or looking at, say, a forest — how do our brains settle on just one interpretation?

In a study published this month in Nature Neuroscience, researchers at The Johns Hopkins University demonstrate that the brain does so by way of a mechanism in a region of the visual cortex called V2.

That mechanism, the researchers say, identifies "figure" and "background" regions of an image, provides a structure for paying attention to only one of those two regions at a time and assigns shapes to the collections of foreground "figure" lines that we see.
Rudiger von der Heydt

"What we found is that V2 generates a foreground-background map for each image registered by the eyes," said Rudiger von der Heydt, a neuroscientist, professor in the university's Zanvyl Krieger Mind/Brain Institute and lead author on the paper. "Contours are assigned to the foreground regions, and V2 does this automatically within a tenth of a second."

The study was based on recordings of the activity of nerve cells in the V2 region of the brains of macaques, whose visual systems are much like those of humans. V2 is roughly the size of a microcassette and is located at the very back of the brain. Von der Heydt said the foreground-background "map" generated by V2 also provides the structure for conscious perception in humans.

"Because of their complexity, images of natural scenes generally have many possible interpretations, not just two, like in Escher's drawings," he said. "In most cases, they contain a variety of cues that could be used to identify fore- and background, but oftentimes, these cues contradict each other. The V2 mechanism combines these cues efficiently and provides us immediately with a rough sketch of the scene."

Von der Heydt called the mechanism "primitive" but generally reliable. It can also, he said, be overridden by a decision of the conscious mind.

"Our experiments show that the brain can also command the V2 mechanism to interpret the image in another way," he said. "This explains why, in Escher's drawings, we can switch deliberately" to see either the white birds or the dark birds.

The mechanism revealed by this study is part of a system that enables us to search for objects in cluttered scenes, so we can attend to the object of our choice and even reach out and grasp it.

"We can do all of this without effort, thanks to a neural machine that generates visual object representations in the brain," von der Heydt said. "Better yet, we can access these representations in the way we need for each specific task. Unfortunately, how this machine' works is still a mystery to us. But discovering this mechanism that so efficiently links our attention to figure-ground organization is a step toward understanding this amazing machine."

Understanding how this brain function works is more than just interesting: It could also assist researchers in unraveling the causes of — and perhaps identifying treatments for — visual disorders such as dyslexia.

Other authors include Fangtu T. Qiu and Tadashi Sugihara, both of the Zanvyl Krieger Mind/Brain Institute. Funding for the research was provided by the National Institutes of Health.

Related Web sites:

> http://www.mb.jhu.edu/vonderheydt.asp
> http://neuroscience.jhu.edu/RudigervonderHeydt.php

PDF copies of the Nature Neuroscience article are available. Contact Lisa De Nike at lde@jhu.edu or 443-287-9960. Also available are digital images of von der Heydt.