Brain recordings shed light on how we process what we see

Researchers have captured human neural activity in unprecedented detail to better understand how the brain processes what we see.

“Because what we see, and our responses to it, are continuously changing, it is challenging to understand how the brain works when taking in new information and then in processing it,” says Jonathan Winawer, a professor of psychology and neuroscience at New York University and the senior author of the paper, which appears in the Journal of Neuroscience.

“This work helps us more deeply appreciate the dynamics of our neural responses to visual images and in ways that can inform future research.”

The human brain is a vastly complex organ that is dynamic in ways beyond our current understanding. This is especially true of visual processing: viewing even a simple, static image on a screen unleashes a vast network of neural activity in our brains.

However, developing a robust understanding of these processes requires invasive techniques not typically used with human subjects. Instead, human studies usually measure brain activity with fMRI, MEG, or EEG, noninvasive methods that only scratch the surface of the complexity of neural operations.

In the new study, Winawer and his colleagues at the University of Amsterdam and Utrecht University took a more invasive approach to uncover, in far greater detail and precision, how the brain processes visual images.

To do so, they studied volunteer epilepsy patients who had been implanted with electrodes to measure a specific phenomenon: the brain activity associated with seizures.

The patients took part in the research by watching pictures on a laptop computer positioned at their hospital bedsides, allowing the neuroscientists to make these rare measurements.

Importantly, the readings of brain activity showed that existing computational models developed to explain neural responses can be applied to human brains. These models were built from earlier studies of non-human primates, and prior to the new work it was not clear whether they would hold for humans.

More specifically, the results showed that these models can accurately predict changes in human brain activity across a variety of changes to a visually presented image: for example, how much longer neurons remain active when a stimulus stays on the screen twice as long, or how much their activity decreases when an image is shown a second time.
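
To make these phenomena concrete, the sketch below is a hypothetical, simplified delayed-normalization-style computation written in Python: a fast linear response is divided by a slower, delayed copy of itself. Every parameter value and implementation detail here is an assumption for illustration, not the authors' fitted model; it only shows how one compact computation can produce both a sharp onset transient and a less-than-doubled total response when a stimulus lasts twice as long.

```python
import numpy as np

def toy_response(stimulus, dt=0.001, tau=0.05, tau_pool=0.1, sigma=0.15, n=2.0):
    """Toy delayed-normalization-style response to a visual stimulus.

    stimulus: array of 0s and 1s (image off/on), sampled every dt seconds.
    All parameter values are illustrative, not fitted to the study's data.
    """
    t = np.arange(0, 1.0, dt)
    # Linear stage: convolve the stimulus with a fast impulse response.
    irf = (t / tau) * np.exp(-t / tau)
    linear = np.convolve(stimulus, irf / irf.sum())[: len(stimulus)]
    # Normalization pool: a slower, delayed copy of the linear response.
    slow = np.exp(-t / tau_pool)
    pool = np.convolve(linear, slow / slow.sum())[: len(stimulus)]
    # Divisive normalization: the delayed pool suppresses the response,
    # yielding a sharp onset transient followed by adaptation.
    return linear**n / (sigma**n + pool**n)

dt = 0.001
short, long_ = np.zeros(1000), np.zeros(1000)
short[100:300] = 1.0   # 200 ms flash
long_[100:500] = 1.0   # 400 ms flash
print(f"total response, 200 ms stimulus: {toy_response(short).sum() * dt:.4f}")
print(f"total response, 400 ms stimulus: {toy_response(long_).sum() * dt:.4f}")
# Doubling the stimulus duration yields less than double the total response
# (subadditive temporal summation); repetition suppression could likewise be
# sketched by letting residual activity linger in the normalization pool.
```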

The fact that a single computational model can predict these different phenomena, the researchers note, suggests that the apparent complexity of neural dynamics in both human and non-human primate brains could result from just a handful of neural computations, knowledge that may yield advances in technologies such as machine vision.

“We found that both human and animal brains seem to be using a similar ‘toolkit’ of neural calculations to make sense of the continuous stream of inputs arriving from our senses,” explains Iris Groen, an assistant professor at the University of Amsterdam and the paper’s lead author.

“Understanding how and why these dynamics unfold as they do is an important part of understanding how the brain represents the outside world and will help us to learn how we can make machine vision more human-like.”

Support for the research came from the National Institutes of Health BRAIN Initiative.

Source: NYU