How we combine faces and voices to recognise people
March 10th, 2011 - 2:52 pm ICT by ANI
Washington, Mar 10 (ANI): A new study has explained how we combine faces and voices to recognise people.
Human social interactions are shaped by our ability to recognise people. Faces and voices are among the key features that enable us to identify individuals: they are rich in information, such as gender, age, and body size, that contributes to a person's unique identity.
A large body of neuropsychological and neuroimaging research has already determined the various brain regions responsible for face recognition and voice recognition separately, but exactly how our brain goes about combining the two different types of information (visual and auditory) is still unknown.
Now, the new study has revealed the brain networks involved in this “cross-modal” person recognition.
A team of researchers in Belgium used functional magnetic resonance imaging (fMRI) to measure brain activity in 14 participants while they performed a task in which they recognised previously learned faces, voices, and voice-face associations.
Dr Frederic Joassin, Dr Salvatore Campanella, and colleagues compared the brain areas activated when recognising people using information from only their faces (visual areas), or only their voices (auditory areas), to those activated when using the combined information.
They found that voice-face recognition activated specific “cross-modal” regions of the brain, located in areas known as the left angular gyrus and the right hippocampus.
Further analysis also confirmed that the right hippocampus was connected to the separate visual and auditory areas of the brain.
Recognising a person from the combined information of their face and voice therefore relies not only on the same brain networks involved in using only visual or only auditory information, but also on brain regions associated with attention (left angular gyrus) and memory (hippocampus).
According to the authors, the findings support a dynamic view of cross-modal interactions, in which the areas processing face and voice information are not simply the final stage of a hierarchical model but may instead work in parallel and influence each other.
The study has been published in the March 2011 issue of Elsevier’s Cortex. (ANI)