UC SANTA BARBARA (US) — When we look for something, we rely on environmental cues and scene context. New research shows where in the brain this process occurs.
Our brains developed this pattern of search over millennia of human evolution. It’s an ability that not only helped us find food and avoid danger in humankind’s earliest days, but continues to aid us today in tasks like driving to work, going shopping, and reading X-rays.
Though a seemingly simple and intuitive strategy, that visual search function—a process that takes mere seconds for the human brain—is still something that a computer, despite technological advances, can’t do as accurately.
The researchers flashed these photos of scenes before the subjects. Highlighted spots indicate where the subjects indicated the most likely area to contain the object named in each scene. Superimposed is a back view of one hemisphere of the brain; the red area is the location of the Lateral Occipital Complex. (Credit: UC Santa Barbara)
“Behind what seems to be automatic is a lot of sophisticated machinery in our brain,” says Miguel Eckstein, professor in University of California, Santa Barbara’s department of psychological & brain sciences. “A great part of our brain is dedicated to vision.”
Where in the brain this process—searching for objects using scene context and other objects—occurs has been little understood. A paper published recently in the Journal of Neuroscience addresses the question for the first time.
‘Made you look’
The researchers flashed hundreds of images of indoor and outdoor scenes before observers and instructed them to search for objects consistent with those scenes. Half of the images, however, did not contain the target object. During the trials, the subjects indicated whether the target object was present in each scene.
The researchers were particularly interested in the images that did not contain the target. In a separate measure, subjects indicated where they expected specific objects to appear in those target-absent scenes.
Invariably, the subjects would indicate similar areas: If presented with a living room scene and told to look for a clock or a painting, they would indicate the wall; if shown a photo of a bathroom and told to indicate where to expect hand soap or a toothbrush, they would indicate the sink.
The searched object’s contextual location in the scenes, according to the study, is represented in the area called the lateral occipital complex (LOC), a place that corresponds roughly to the lower back portion of the head, toward the side. This area, according to Eckstein, has the ability to account for other objects in the scene that often appear in close spatial proximity with the searched object—something computers are only recently being taught to do.
“So, if you’re looking for a computer mouse on a cluttered desk, a machine would be looking for things shaped like a mouse. It might find it, but it might see other objects of similar shape, and classify that as a mouse,” Eckstein says. Computer vision systems might also not associate their target with specific locations or other objects. So, to a machine, the floor is just as likely a place for a mouse as a desk.
The LOC, on the other hand, would contain the information the brain needs to direct a person’s attention and gaze first toward the most likely place that a mouse might be, such as on top of the desk, or near the keyboard. From there, other visual parts of the brain go to work, searching for particular characteristics, or determining the target’s presence.
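The contextual-guidance idea described above can be sketched as a toy model: candidate locations are scored by combining an appearance match (how target-like a patch looks) with a prior over where the target usually appears given the scene. The function names and numbers below are illustrative assumptions, not taken from the study.

```python
# Toy sketch of contextual guidance in visual search (illustrative only).
# Each candidate location has an appearance score (how "mouse-like" the
# local patch looks) and a context prior (how likely a mouse is to appear
# there, given co-occurring objects like desks and keyboards).

def contextual_search(candidates):
    """Rank candidate locations by appearance score weighted by context prior."""
    ranked = sorted(
        candidates,
        key=lambda c: c["appearance"] * c["context_prior"],
        reverse=True,
    )
    return [c["location"] for c in ranked]

# A shape-only searcher (the naive machine described above) would rank
# "floor" highest on appearance alone; weighting by the context prior
# directs the first look to "near keyboard" instead.
candidates = [
    {"location": "floor",         "appearance": 0.9, "context_prior": 0.05},
    {"location": "near keyboard", "appearance": 0.8, "context_prior": 0.70},
    {"location": "bookshelf",     "appearance": 0.6, "context_prior": 0.10},
]
print(contextual_search(candidates))
# With these illustrative numbers: ['near keyboard', 'bookshelf', 'floor']
```

The design choice mirrors the article’s point: the prior does not replace appearance matching, it reorders where attention is deployed first.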
So strong is scene context in biasing search, says Eckstein, that if another similar-looking object were placed where the mouse is likely to be, and that scene were briefly flashed before your eyes, you would likely, and erroneously, interpret that object as the mouse.
While scene context information is most strongly represented in the LOC, other visual areas of the brain are also influenced by context to varying degrees, including the intraparietal sulcus, located near the top of the head, and the retrosplenial cortex, found in the brain’s interior.
“Since contextual guidance is a critical strategy that allows humans to rapidly find objects in scenes, studying the brain areas involved in normal humans might help us to gain a better understanding of neural areas involved in those with visual search deficits, such as brain-damaged patients and the elderly,” Eckstein says.
“Also, a large component of becoming an expert searcher—like radiologists or fishermen—is exploiting contextual relationships to search. Thus, understanding the neural basis of contextual guidance might allow us to gain a better understanding about what brain areas are critical to gain search expertise.”
Additional researchers from the Institute for Collaborative Biotechnologies at UC Santa Barbara contributed to the study, which was supported by the National Eye Institute.
Source: UC Santa Barbara