Features

Selective Hearing

Brain-mapping sheds light on "cocktail party effect."


Scientists at the University of California-San Francisco (UCSF) have shed light on the workings of selective hearing - how people can tune in to a single speaker while tuning out their crowded, noisy environs (Nature, April 19, 2012).

Psychologists have known for decades about the so-called "cocktail party effect," a name that evokes the Mad Men era in which it was coined. It is the remarkable human ability to focus on a single speaker in virtually any environment - a classroom, sporting event or coffee bar - even if that person's voice is seemingly drowned out by a jabbering crowd. 

To understand how selective hearing works in the brain, UCSF neurosurgeon Edward Chang, MD, of the Keck Center for Integrative Neuroscience, and postdoctoral fellow Nima Mesgarani, PhD, worked with three patients who were undergoing brain surgery for severe epilepsy. Part of this surgery involves pinpointing the areas of the brain responsible for the patients' disabling seizures. The epilepsy team finds those locales by mapping brain activity over the course of a week, using a thin sheet of up to 256 electrodes placed on the cortex. The electrodes record activity in the temporal lobe, which is home to the auditory cortex.

UCSF is one of few leading academic epilepsy centers where these advanced intracranial recordings are done. The ability to record safely from the brain itself provides unique opportunities to advance the fundamental knowledge of how the brain works.

"The combination of high-resolution brain recordings and powerful decoding algorithms opens a window into the subjective experience of the mind that we've never seen before," said Dr. Chang, who is also co-director of the Center for Neural Engineering and Prostheses at UC-Berkeley and UCSF. 

During the experiments, patients listened to two speech samples played simultaneously in which different phrases were spoken by different speakers. They were asked to identify the words they heard spoken by one of the two speakers.

The researchers then applied new decoding methods to "reconstruct" what the subjects heard by analyzing their brain activity patterns. They found that neural responses in the auditory cortex reflected only the speech of the targeted speaker. Based on those neural patterns alone, the decoding algorithm could predict which speaker, and even which specific words, the subject was attending to.
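The logic of this kind of "stimulus reconstruction" decoding can be illustrated with a toy sketch. The code below is not the UCSF team's method; it is a minimal, hypothetical simulation in which neural responses are modeled as a noisy linear mixture dominated by the attended speaker's spectrogram. A ridge-regression decoder trained on one segment reconstructs the stimulus from held-out neural data, and correlating the reconstruction against each speaker's spectrogram reveals which voice was being tracked.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, E = 4000, 16, 64          # time points, spectrogram bands, electrodes

# Toy stand-ins for the two simultaneous speech streams
spk1 = rng.standard_normal((T, F))   # attended speaker's spectrogram
spk2 = rng.standard_normal((T, F))   # ignored speaker's spectrogram

# Simulated cortical responses: driven far more strongly by the attended
# speaker, mirroring the finding that auditory cortex tracks the target voice.
W = rng.standard_normal((F, E))
neural = spk1 @ W + 0.1 * (spk2 @ W) + 0.5 * rng.standard_normal((T, E))

half = T // 2
# Train a ridge-regression decoder on the first half, where the attended
# target is known: map electrode activity back to a spectrogram.
X, Y = neural[:half], spk1[:half]
lam = 1.0
D = np.linalg.solve(X.T @ X + lam * np.eye(E), X.T @ Y)   # (E, F) decoder

# Reconstruct the stimulus from held-out neural data.
recon = neural[half:] @ D

def corr(a, b):
    """Pearson correlation between two flattened arrays."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The reconstruction resembles the attended speaker, not the ignored one.
r1 = corr(recon, spk1[half:])
r2 = corr(recon, spk2[half:])
attended = "speaker 1" if r1 > r2 else "speaker 2"
print(attended, round(r1, 2), round(r2, 2))
```

In this simulation the reconstruction correlates strongly with the attended stream and only weakly with the ignored one, which is the same signature the decoding algorithm exploited to tell which speaker a subject was listening to.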

"The algorithm worked so well that we could predict not only the correct responses but also even when they paid attention to the wrong word," Dr. Chang said. 

The new findings show that the representation of speech in the cortex does not just reflect the entire external acoustic environment but instead just what we really want or need to hear. They represent a major advance in understanding how the human brain processes language, with immediate implications for the study of impairment during aging, attention deficit disorder, autism and language learning disorders.

In addition, this technology may someday be used in neuroprosthetic devices to decode the intentions and thoughts of paralyzed patients who cannot communicate, Dr. Chang said.

Revealing how the brain is wired to favor some auditory cues over others may even inspire new approaches toward automating and improving how voice-activated electronic interfaces filter sounds in order to properly detect verbal commands.

The research was funded by the National Institutes of Health and the Esther A. and Joseph Klingenstein Foundation.




     
