
To read words, brain detects motion

STANFORD (US) — Motion, not just the black-and-white contrast of the printed word, can help us recognize words, and thus read, a new study shows.

The finding not only demonstrates the flexibility of the human visual system; it may also have implications for dyslexia and other reading disorders.

An area of the brain called the Visual Word Form Area, or VWFA, is activated whenever we see something that looks like a word, and it is so adept at packaging visual input for the brain’s language centers that activation happens within a few tens of milliseconds.


The problem of picking out words from a visual scene is strikingly complex—complex enough that it is used to distinguish human Internet users from automated software programs. If you’ve ever been asked to type out a distorted word before gaining access to your email—a security test known as a captcha—you’ve proven that you’re a better reader than your computer.

In a new study published in the journal Neuron, neuroscientists detail the discovery that one key to the VWFA’s function is its ability to recognize words through more than one visual pathway.

The VWFA “didn’t originally evolve for reading,” explains lead author Andreas Rauschecker, an MD/PhD candidate at Stanford University. “We likely invented reading to give the VWFA what it likes to see.”

Located in the ventral occipitotemporal cortex at the back of the brain, the VWFA appears to act as a relay station between the primary visual cortex and the brain regions dedicated to language recognition and production. As an individual’s reading ability improves, the VWFA has been shown to expand into neighboring brain regions, including the region devoted to facial recognition.

But what does the VWFA find appealing about words? Traditionally, researchers have thought of words as defined by “luminance contrast”—black letters on white paper, for instance. Rauschecker, however, was interested in a potential alternate pathway.

Instead of being “luminance-defined,” words can be “motion-defined,” distinguishable from their background not by color or contrast, but by their apparent direction of movement. Against a field of dots moving one way, words made up of dots moving in the other direction will “pop out” to most viewers, even if the word and background dots are the same shade.
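To make the stimulus concrete, here is a minimal Python sketch, not the study’s actual code, of one way such a display can be built; the word-shaped boolean array word_mask, the dot count, and the drift speed are illustrative assumptions.

```python
# Minimal sketch of a motion-defined word stimulus (illustrative only).
# Every dot has the same luminance, so the word is invisible in any single
# frame; it is defined solely by dots inside the word-shaped region
# drifting in the opposite direction from the background dots.
import numpy as np

def motion_defined_frames(word_mask, n_dots=2000, n_frames=60,
                          speed=2.0, seed=0):
    """Return a list of grayscale frames; word_mask is a hypothetical
    boolean (height x width) array marking the word's shape."""
    h, w = word_mask.shape
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, w, n_dots)  # random starting positions
    y = rng.uniform(0, h, n_dots)

    frames = []
    for _ in range(n_frames):
        inside = word_mask[y.astype(int), x.astype(int)]
        # Dots inside the word drift right, background dots drift left.
        x = (x + np.where(inside, speed, -speed)) % w

        frame = np.zeros((h, w), dtype=np.uint8)
        frame[y.astype(int), x.astype(int)] = 255  # all dots the same shade
        frames.append(frame)
    return frames
```

Played back as an animation, these frames would show the word popping out from the background purely through opposite motion, even though any single frame is just a uniform field of identical dots.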

“In some ways, this is an especially extreme version of a captcha,” Rauschecker says.

Participants in the study were asked to read while their brains were scanned by a functional MRI (fMRI) machine. The researchers presented the participants with various types of words, defined by either motion or luminance contrast, and watched for activation of the VWFA.

The researchers reasoned that, if the VWFA were only looking for a basic visual feature, such as the shapes of black-on-white letters, it shouldn’t activate in the presence of motion-defined words. But scans showed the area responded equally to all legible words.

The result implied that the VWFA can receive information from the human MT complex, or hMT+: a region of the visual cortex necessary for motion perception.

The fMRI scans showed that the hMT+ did activate in the presence of motion-defined words, although it was unresponsive to other types of words. This finding suggested the existence of two separate visual pathways to the VWFA.

The researchers also targeted transcranial magnetic stimulation precisely at each individual’s hMT+. The technique, which applies a rapidly changing magnetic field to induce an electric current in the brain, can be used to briefly inject noise into specific brain regions, temporarily disrupting their function. Stimulation dramatically reduced reading performance for motion-defined words while leaving performance for luminance-defined words unaffected.

“How exactly the information ends up in the VWFA depends on the specific visual features,” Rauschecker says. “There’s very flexible routing.”

Furthermore, the pathways seem to be partially additive: a word defined by both motion and color elicits a stronger VWFA response than a word defined by either feature alone. That raises the possibility of compensating for specific reading disabilities by designing electronic typefaces that re-route visual information through undamaged areas of the brain.

A digital font with a movement component could increase legibility for some people who have difficulty reading.

The participation of hMT+ in the pathway is particularly interesting, as previous studies have shown that the region is less responsive to motion in dyslexics.

“That was something of a random finding,” says Rauschecker. “There was no reason to think that, by showing people a moving stimulus, you should be able to predict their reading ability.”

The research raises as many questions as it answers. Motion-defined words are an unusual stimulus, and the role of hMT+ in normal reading is still unclear. It may be involved in a reader’s ability to switch rapidly from one word to the next in a sentence, though this remains only a hypothesis.

Even the VWFA itself isn’t the end of the word-recognition story. Participants also performed a decision task during their fMRI scans: identifying words as either real words or nonsense words. VWFA activation was necessary for correct identification, but not sufficient.

Brian Wandell, a professor of psychology at Stanford, is the paper’s senior author. Funding for the research was provided by the Bio-X Graduate Student Fellowship and the National Institutes of Health.

More news from Stanford University: http://news.stanford.edu/
