Novel software scans listeners' brains to discern who said what
November 14th, 2008 - 2:07 pm ICT by ANI
London, November 14 (ANI): Dutch researchers have developed a mind-reading software program that can scan a brain to decipher the sounds spoken to a person, and even who is saying them.
Neuroscientists at Maastricht University in The Netherlands say that they trained the software by using functional magnetic resonance imaging (fMRI) to track the brain activity of seven people, while they heard three different speakers saying simple vowel sounds.
Describing their observations, the researchers said that each speaker and each sound created a distinctive “neural fingerprint” in a listener’s auditory cortex, the brain region that deals with hearing.
The team said that they utilised those fingerprints to create rules that could decode future activity, and determine both who was being listened to, and what they were saying.
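The decoding approach described above can be illustrated with a toy sketch: treat each (speaker, vowel) pair as a class whose “neural fingerprint” is a fixed pattern across auditory-cortex voxels, average noisy training trials into per-class templates, and label a new trial by its nearest template. All data, dimensions, and the nearest-centroid rule here are illustrative assumptions, not the Maastricht team’s actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50
speakers = ["sp1", "sp2", "sp3"]
vowels = ["a", "i", "u"]

# Hypothetical fixed "fingerprint" per (speaker, vowel) class;
# individual trials are the fingerprint plus measurement noise.
fingerprints = {(s, v): rng.normal(size=n_voxels)
                for s in speakers for v in vowels}

def simulate_trial(speaker, vowel, noise=0.3):
    """Simulate one fMRI activity pattern for a heard (speaker, vowel)."""
    return fingerprints[(speaker, vowel)] + rng.normal(scale=noise, size=n_voxels)

# "Training": average several noisy trials per class into a centroid,
# standing in for the decoding rules the researchers derived.
centroids = {cls: np.mean([simulate_trial(*cls) for _ in range(10)], axis=0)
             for cls in fingerprints}

def decode(pattern):
    """Return the (speaker, vowel) class whose centroid is nearest."""
    return min(centroids, key=lambda cls: np.linalg.norm(pattern - centroids[cls]))

# Decode a held-out trial: both who was speaking and what vowel was said.
print(decode(simulate_trial("sp2", "i")))
```

With well-separated fingerprints and modest noise, the decoder recovers both the speaker and the vowel from a single unseen trial, mirroring the two-way identification the article reports.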
“We have (created) a sort of speech-recognition device which is completely based on the brain activity of the listener,” New Scientist magazine quoted Elia Formisano of Maastricht University, who led the group, as saying.
The researchers believe that, given recent advances in fMRI, such techniques may someday be extended to identifying what a person is looking at from their brain activity.
“This is the first study in which we can really distinguish two human voices, or two specific sounds,” Formisano said.
The researcher said that the study also helped the group make new discoveries about how the brain processes speech.
The team observed that sounds made by any person trigger the same voice fingerprint in the brain, and similarly a given vowel sound produces the same fingerprint, independent of the speaker.
Formisano says that such fingerprints help the software recognise combinations of speakers and sounds that it has not encountered before.
It should be possible to teach the system how to recognise all the component sounds of speech, and then recognise full words, the researcher adds.
“Vowel sounds are not meaningful, but they are language. These are the building blocks of language,” he says.
The researchers are presently trying to develop a system that will be capable of recognising more complex sounds without training, even in noisy environments.
A research article on this work has been published in the journal Science. (ANI)