What we see increases our understanding of what we hear by six-fold
March 4th, 2009 - 12:13 pm ICT by ANI
Washington, March 4 (ANI): Why is it often difficult to understand what a friend is saying at a noisy party without seeing his/her face? According to a new study, this is because the visual information absorbed from watching a speaker can improve the understanding of spoken words by as much as six-fold.
Lead researcher Dr. Wei Ji Ma, an assistant professor of neuroscience at Baylor College of Medicine in Houston, points out that the brain uses information derived from a speaker’s face and lip movements to help interpret what it hears, and that this benefit is greatest when the background noise is moderate.
“Most people with normal hearing lip-read very well, even though they don’t think so. At certain noise levels, lip-reading can increase word recognition performance from 10 to 60 percent correct,” Ma says in a study report published in the open access journal PLoS ONE.
The researcher, however, adds that lip-reading becomes difficult when the environment is very noisy, or when the voice someone is trying to understand is very faint.
“We find that a minimum sound level is needed for lip-reading to be most effective,” says Ma.
This is the first study to focus on word recognition in a natural setting, where people freely report what they believe is being said.
Working in collaboration with researchers from the City College of New York, Ma and his colleagues developed a mathematical model that allowed them to predict how successful a person will be at integrating the visual and auditory information.
According to Ma, people actually combine the two stimuli close to optimally, and what they perceive depends upon the reliability of the stimuli.
“Suppose you are a detective. You have two witnesses to a crime. One is very precise and believable. The other one is not as believable. You take information from both and weigh the believability of each in your determination of what happened,” he said.
He further said that lip-reading, in a way, involved the same kind of integration of information in the brain.
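The detective analogy describes the standard Bayesian model of cue combination, in which each sensory estimate is weighted by its reliability (inverse variance). The sketch below is illustrative only; the numbers and function names are hypothetical, not taken from the study, and Gaussian noise is assumed.

```python
# Minimal sketch of reliability-weighted cue integration under Gaussian
# noise -- the standard Bayesian model such studies build on.
# All values below are illustrative, not data from the study.

def integrate_cues(estimate_a, var_a, estimate_b, var_b):
    """Combine two noisy estimates, weighting each by its
    reliability (the inverse of its variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined = w_a * estimate_a + w_b * estimate_b
    # The combined estimate is more reliable than either cue alone:
    combined_var = 1 / (1 / var_a + 1 / var_b)
    return combined, combined_var

# Auditory cue: noisy (high variance); visual cue: reliable (low variance).
audio_est, audio_var = 10.0, 4.0
visual_est, visual_var = 14.0, 1.0

combined, combined_var = integrate_cues(audio_est, audio_var,
                                        visual_est, visual_var)
# The combined estimate lands closer to the more reliable (visual) cue,
# and its variance is smaller than that of either cue on its own.
```

In this toy example the visual cue is four times more reliable than the auditory one, so the combined estimate sits much closer to it, mirroring how a clear view of a speaker's lips dominates perception when the sound is noisy.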
During the experiments, the participants were shown videos in which either a real person spoke a word or a cartoon character appeared whose mouth movements did not truly match the word.
The researchers observed that when the person was presented normally, the visual information provided a great benefit when it was integrated with the auditory information, especially when there was moderate background noise.
But when the person was just a “cartoon”, the visual information was still helpful, though not as much.
In another study, the person mouthed one word while the audio played a different one; the brain often integrated the two stimuli into an entirely different perceived word.
“The mathematical model can predict how often the person will understand the word correctly in all these contexts,” Ma said. (ANI)