Speech perception involves multiple senses, not just hearing
February 12th, 2009 - 4:42 pm ICT by ANI
Washington, February 12 (ANI): While people generally think of speech as being something they hear, a report now suggests that speech perception involves multiple senses.
Published in the journal Current Directions in Psychological Science, the report says that the brain treats speech as something that people hear, see, and even feel.
Psychologist Lawrence D. Rosenblum of the University of California, Riverside, says that people receive a lot of speech information via visual cues, such as lip-reading, and this type of visual speech occurs throughout all cultures.
He says that the information comes not only from the lips: when someone is speaking, listeners also register the movements of the teeth, the tongue, and other non-mouth facial features.
The researcher says that it is likely that human speech perception has evolved to integrate many senses together, that is, speech is not meant to be just heard, but also to be seen.
The McGurk Effect is a well-characterized example of the integration between what we see and what we hear when someone is speaking to us.
Rosenblum points out that the phenomenon occurs when a sound is dubbed with a video showing a face making a different sound. For example, the audio may be playing “ba”, while the face looks as though it is saying “va”.
When confronted with this, we will usually hear “va” or a combination of the two sounds, such as “da.”
The McGurk Effect occurs even when participants are aware of the dubbing or told to concentrate only on the audio.
Rosenblum says that this is evidence that once senses are integrated together, it is not possible to separate them.
According to him, studies conducted recently show that this integration occurs very early in the speech process, even before the basic units of speech are established.
Rosenblum suggests that the physical movements of speech (the movements of mouths and lips) create acoustic and visual signals that have a similar form.
He argues that, as far as the speech brain is concerned, the auditory and visual information are never really separate. This, he says, is why people integrate speech so readily, and in such a way that the audio and visual speech signals become indistinguishable from one another.
Rosenblum concludes that visual-speech research has a number of clinical implications, especially in the areas of autism, brain injury and schizophrenia and that “rehabilitation programs in each of these domains have incorporated visual-speech stimuli.” (ANI)