UC BERKELEY (US) — Even in a noisy room, your brain listens to what you say, filtering out unwanted noise while amplifying the sounds you make and hear.
“We used to think that the human auditory system is mostly suppressed during speech, but we found closely knit patches of cortex with very different sensitivities to our own speech that paint a more complicated picture,” says Adeen Flinker, a doctoral student in neuroscience at the University of California, Berkeley, and lead author of a new study reported in the Journal of Neuroscience.
Researchers tracked the electrical signals emitted from the brains of hospitalized epilepsy patients. They discovered that neurons in one part of the patients’ hearing mechanism were dimmed when they talked, while neurons in other parts lit up.
In the study, researchers examined the electrical activity in the healthy brain tissue of patients who were being treated for seizures. The patients had volunteered to take part in the experiment during lulls in their treatment, since electrodes had already been implanted over their auditory cortices to track the focal points of their seizures.
The researchers instructed the patients to perform tasks such as repeating words and vowels they heard, and recorded the resulting activity. Comparing the electrical signals discharged during speaking and listening, they found that some regions of the auditory cortex showed less activity during speech, while others showed the same or higher levels.
“This shows that our brain has a complex sensitivity to our own speech that helps us distinguish between our vocalizations and those of others, and makes sure that what we say is actually what we meant to say,” Flinker says.
Previous studies have shown a selective auditory system in monkeys that can amplify their self-produced mating, food and danger alert calls, but until this latest study, it was not clear how the human auditory system is wired.
“We found evidence of millions of neurons firing together every time you hear a sound right next to millions of neurons ignoring external sounds but firing together every time you speak,” Flinker says. “Such a mosaic of responses could play an important role in how we are able to distinguish our own speech from that of others.”
While the study doesn’t specifically address why humans need to track their own speech so closely, Flinker theorizes that, among other things, doing so is important for language development, for monitoring what we say, and for adjusting to various noise environments.
“Whether it’s learning a new language or talking to friends in a noisy bar, we need to hear what we say and change our speech dynamically according to our needs and environment,” Flinker says.
He notes that people with schizophrenia have trouble distinguishing their own internal voices from the voices of others, suggesting that they may lack this selective auditory mechanism. The findings may be helpful in better understanding some aspects of auditory hallucinations, he adds.
Moreover, the finding that sub-regions of brain cells, located just a few millimeters apart, each handle a different volume-control job paves the way for a more detailed mapping of the auditory cortex to guide brain surgery.
Researchers from the University of California, San Francisco, and Johns Hopkins University collaborated on the work.
More news from UC Berkeley: http://newscenter.berkeley.edu/