2 brain systems team up to perceive places

The brain has two distinct systems for perceiving our environment, one for recognizing a place and another for navigating through it, according to a new study.

Nearly 30 years ago, scientists demonstrated that visually recognizing an object, such as a cup, and performing a visually guided action, such as picking the cup up, involved distinct neural processes, located in different areas of the brain. The new study shows that the same is true for how the brain understands the environments around us.

The researchers drew their conclusions from experiments using functional magnetic resonance imaging (fMRI). The results showed that the brain’s parahippocampal place area responded more strongly during a scene-recognition task, while the occipital place area responded more strongly during a navigation task.

The work could have important implications for helping people recover from brain injuries and for the design of computer vision systems, such as those that guide self-driving cars.

Separate systems

“It’s thrilling to learn what different regions of the brain are doing,” says senior author Daniel Dilks, an assistant professor of psychology at Emory University. “Learning how the mind makes sense of all the information that we’re bombarded with every day is one of the greatest of intellectual quests. It’s about understanding what makes us human.”

Entering a place and recognizing where you are, whether it’s a kitchen, a bedroom, or a garden, happens almost instantaneously, and you can begin making your way around the space nearly as quickly.

“People assumed that these two brain functions were jumbled up together—that recognizing a place was always navigationally relevant,” says first author Andrew Persichetti, who worked on the study as a graduate student. “We showed that’s not true, that our brain has dedicated and dissociable systems for each of these tasks. It’s remarkable that the closer we look at the brain, the more specialized systems we find—our brains have evolved to be super efficient.”

Persichetti, who now works at the National Institute of Mental Health, explains that an interest in philosophy led him to neuroscience. “Immanuel Kant made it clear that if we can’t understand the structure of our mind, the structure of knowledge, we’re not going to fully understand ourselves, or even a lot about the outside world, because that gets filtered through our perceptual and cognitive processes,” he says.

The Dilks lab focuses on mapping how the visual cortex is functionally organized. “We are visual creatures and the majority of the brain is related to processing visual information, one way or another,” Dilks says.

The brain and perception

Since the late 1800s, researchers have wondered why brain damage sometimes produces oddly selective visual deficits. For example, a person might have normal visual function in every way except the ability to recognize faces.

It was not until 1992, however, that David Milner and Melvyn Goodale published an influential paper delineating two distinct visual systems in the brain: the ventral stream, which runs into the temporal lobe and supports object recognition, and the dorsal stream, which runs into the parietal lobe and guides actions directed at objects.

In 1997, MIT’s Nancy Kanwisher and colleagues demonstrated that a region of the brain is specialized for face perception—the fusiform face area, or FFA. Just a year later, Kanwisher’s lab delineated a neural region specialized for processing places, the parahippocampal place area (PPA), located in the ventral stream.

While working as a postdoctoral fellow in the Kanwisher lab, Dilks led the discovery of a second region specialized for processing places, the occipital place area, or OPA, located in the dorsal stream.

Among the first questions Dilks wanted to tackle after setting up his own lab in 2013 (the same year as the OPA discovery) was why the brain has two regions dedicated to processing places.

Delegated duties

Persichetti designed an experiment to test the hypothesis that place processing is divided in the brain in a manner similar to object processing. Using software from The Sims life-simulation game, he created digital images of three places: a bedroom, a kitchen, and a living room. Each room had a path leading through it and out one of three doors.

Researchers asked study participants in the fMRI scanner to fixate their gaze on a tiny white cross. On each trial, an image of one of the rooms then appeared, centered behind the cross. Participants were asked to imagine they were standing in the room and indicate through a button press whether it was a bedroom, a kitchen, or a living room.

On separate trials, researchers asked the same participants to imagine that they were walking on the continuous path through the exact same room and indicate whether they could leave through the door on the left, in the center, or on the right.

The resulting data showed that the two brain regions were selectively activated depending on the task: the PPA responded more strongly during the recognition task, while the OPA responded more strongly during the navigation task.
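
To make the logic of that comparison concrete, here is a minimal, hypothetical sketch of the kind of region-of-interest (ROI) contrast such findings rest on. It is not the study’s actual analysis pipeline: the sample size, response values, and effect sizes are simulated for illustration. The sketch assumes each participant contributes one mean fMRI response per region (PPA, OPA) and per task (recognition, navigation), runs a paired t-test within each region, and then tests the region-by-task interaction, which is the statistical signature of two dissociable systems.

import numpy as np
from scipy import stats

# Hypothetical sketch, not the study's pipeline: simulated mean fMRI
# responses (e.g., beta estimates), one per participant, region, and task.
rng = np.random.default_rng(0)
n = 16  # illustrative sample size

ppa_recognition = rng.normal(1.2, 0.3, n)
ppa_navigation = rng.normal(0.8, 0.3, n)
opa_recognition = rng.normal(0.7, 0.3, n)
opa_navigation = rng.normal(1.1, 0.3, n)

# Within-region paired t-tests: does the task change the response?
t_ppa, p_ppa = stats.ttest_rel(ppa_recognition, ppa_navigation)
t_opa, p_opa = stats.ttest_rel(opa_navigation, opa_recognition)
print(f"PPA, recognition > navigation: t={t_ppa:.2f}, p={p_ppa:.4f}")
print(f"OPA, navigation > recognition: t={t_opa:.2f}, p={p_opa:.4f}")

# The key evidence for dissociable systems is the region-by-task
# interaction: the task effect runs in opposite directions in the two regions.
interaction = (ppa_recognition - ppa_navigation) - (opa_recognition - opa_navigation)
t_int, p_int = stats.ttest_1samp(interaction, 0.0)
print(f"Region-by-task interaction: t={t_int:.2f}, p={p_int:.4f}")

In practice, those per-task responses would typically come from fitting a general linear model to each participant’s fMRI time series within independently localized PPA and OPA regions.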

Reverse-engineering the brain

“While it’s incredible that we can show that different parts of the cortex are responsible for different functions, it’s only the tip of the iceberg,” Dilks says. “Now that we understand what these areas of the brain are doing we want to know precisely how they’re doing it and why they’re organized this way.”

Dilks plans to run causal tests on the two scene-processing areas. Repetitive transcranial magnetic stimulation, or rTMS, is a non-invasive technique that uses a coil placed against the scalp to temporarily disrupt activity in a targeted brain region; he plans to use it to deactivate the OPA in healthy participants and test whether they can still navigate without it.

The same technique can’t be used to deactivate the PPA, however, because that region lies deeper in the temporal lobe, beyond rTMS’s reach. Instead, the Dilks lab plans to recruit participants with brain injuries affecting the PPA region to test for any effects on their ability to recognize scenes.

Clinical applications for the research include more precise guidance for surgeons who operate on the brain and better brain rehabilitation methods.

“My ultimate goal is to reverse-engineer the human brain’s visual processes and replicate them in a computer vision system,” Dilks says. “In addition to improving robotic systems, a computer model could help us to more fully understand the human mind and brain.”

The research appears in the Journal of Neuroscience.

Source: Emory University