Your face movements determine how you “hear” what you hear
January 24th, 2009 - 12:40 pm ICT by ANI
Washington, Jan 24 (ANI): The movement of facial skin plays a key role not only in the way the sounds of words are made, but also in the way they are heard, says a new study.
“How your own face is moving makes a difference in how you ‘hear’ what you hear,” said first author Takayuki Ito, a senior scientist at Haskins Laboratories, a Yale-affiliated research laboratory.
When Ito and his colleagues used a robotic device to stretch the facial skin of “listeners” in a way that would normally accompany speech production, they found that it affected the way the subjects heard the speech sounds.
The subjects listened to words, one at a time, drawn from a computer-produced continuum between the words “head” and “had.”
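A rough idea of how such a “head”–“had” continuum can be produced is sketched below. This is a minimal illustration, not the study’s actual stimulus-generation method: it assumes a simple formant synthesizer in which the first formant (F1) is interpolated between textbook-approximate values for the two vowels, and every name and number in it is illustrative.

import numpy as np
from scipy.signal import lfilter

FS = 16000   # sample rate (Hz)
F0 = 120     # fundamental frequency of the glottal source (Hz)
DUR = 0.3    # vowel duration (s)

def resonator(x, freq, bw, fs=FS):
    # Second-order all-pole resonator centered on one formant frequency.
    r = np.exp(-np.pi * bw / fs)
    theta = 2.0 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]
    b = [sum(a)]  # crude gain normalization
    return lfilter(b, a, x)

def synth_vowel(f1, f2=1800.0, f3=2500.0):
    # Impulse-train glottal source passed through F1-F3 resonators.
    n = int(FS * DUR)
    source = np.zeros(n)
    source[::FS // F0] = 1.0          # one pulse per pitch period
    out = source
    for freq, bw in [(f1, 80.0), (f2, 100.0), (f3, 120.0)]:
        out = resonator(out, freq, bw)
    return out / np.max(np.abs(out))  # normalize amplitude

# Seven-step continuum: F1 moves from ~550 Hz ("head"-like)
# to ~700 Hz ("had"-like); values are rough textbook estimates.
continuum = [synth_vowel(f1) for f1 in np.linspace(550.0, 700.0, 7)]

Each waveform in the resulting list could then be written to a WAV file (for instance with scipy.io.wavfile.write) for playback to listeners.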
When the robot stretched the listener’s facial skin upward, words sounded more like “head.” With downward stretch, words sounded more like “had.” A backward stretch had no perceptual effect.
And the timing of the skin stretch was critical: perceptual changes were observed only when the stretch was similar to what occurs during speech production.
These effects of facial skin stretch indicate the involvement of the somatosensory system in the neural processing of speech sounds.
The study shows that there is a broad, non-auditory basis for “hearing” and that speech perception has important neural links to the mechanisms of speech production.
The study “Somatosensory function in speech perception” has been published in the Proceedings of the National Academy of Sciences (PNAS). (ANI)