Seeing While Hearing Speeds Brain's Processing of Speech

While the R&B classic "I Heard It Through the Grapevine" advises you to "believe half of what you see and none of what you hear," a University of Maryland study has found that seeing and hearing together speed up the brain's ability to process what someone is saying -- whether or not they're speaking the truth.
The study, published in the Proceedings of the National Academy of Sciences, combines neuroscience and linguistics to confirm for the first time that seeing a speaker talk -- called visual speech -- helps the brain process the spoken words -- the auditory speech -- faster than if the words are heard alone.

David Poeppel, associate professor of linguistics at Maryland and senior author of the study, says the study indicates that when a listener can see the speaker's mouth, the listener's brain predicts what sound is about to be heard, a process called predictive coding.

"Moving the mouth comes before the sound," Poeppel said. "The brain uses the slightly preceding visual information to make a prediction, almost instantaneously, of what the sound will be."

That combination of visual and auditory speech, Poeppel says, "gives you the information to get to recognition faster and more accurately" than hearing alone.

Poeppel, along with his Ph.D. student Virginie van Wassenhove and Ken W. Grant of the Walter Reed Army Medical Center, arrived at these findings through an EEG (electroencephalography) study of 26 participants, all native speakers of American English.

Their brain activity was measured while they listened to auditory speech alone, and again while they watched a video in which a woman spoke "pa," "ta" and "ka," common English syllables.

Not only did the brain process the sounds faster when visual and auditory speech were combined, it also took less effort to reach recognition.

"This discovery contradicts the commonly held notion that audio and visual speech together are more than the sum of their parts," Poeppel said. "It actually takes 'less' brain activity to process the information and do it in less time."

The study also provides the first neurological evidence to support the Analysis-by-Synthesis model of speech processing, which Morris Halle and Ken Stevens developed in the 1950s based on the notion of predictive coding.

"This is the first time to connect audiovisual speech to the theory," says Poeppel. "We're increasingly learning about the importance of multi-sensory integration."

The study was supported by a grant from the National Institutes of Health.

Source: University of Maryland

Citation: Seeing While Hearing Speeds Brain's Processing of Speech (2005, January 15), https://phys.org/news/2005-01-brain-speech.html