Making hearing aids better

5/22/2009 Laurel Bollinger, ECE ILLINOIS

“People do not like to talk about it that much, but hearing aids do not work very well,” said ECE Associate Professor Jont Allen. “It’s not like they’re useless, they just don’t work like real ears.”

Jont Allen

According to Allen, this is the most prevalent issue facing hearing aid companies and hearing-impaired individuals. Having devoted the past 15 years to solving this problem, Allen and his team of students have come closer to a solution over the past year.

Allen began by studying how the ear works. That research slowly led him in a different direction: applying what he had learned to hearing aid development. Since joining Illinois in 2002, Allen has benefitted greatly from having students to help him with his theory and research.

For a long time, Allen has wanted to solve the problem of how humans decode speech sound. Until he started, there really was no research or theory behind it. He was treading into uncharted waters. And what he discovered was not what was expected.

“What we found is that there are some spots in speech that are critically important and some spots, actually huge areas, that are completely irrelevant,” said Allen. “The spots that are critically important are always onset transients, or those little bursts of energy in speech. Different sounds are characterized by different patterns of onset transients. It is those various little pieces of onset transients that define what you hear.”

A hearing-impaired listener has difficulty with certain sounds simply because the correlated speech features are inaudible due to the hearing loss. In an extreme case, a cochlear dead region may totally block a speech feature within a frequency range. For example, one research subject had a cochlear dead region around 2,000 hertz in her left ear. As a consequence, she cannot hear /ka/ and /ga/, which are characterized by features at that same frequency. In contrast, her right ear, which has no cochlear dead regions, can hear both sounds.

Feipeng Li, one of Allen’s graduate students, has been working extensively with this patient. He says that an improved understanding of such dead regions will help shape the future of hearing aids, making them more useful and more successful at restoring lost hearing.

Feipeng Li

“If you want to make a better hearing aid, you need to know what the problem is. Right now people don’t know what’s wrong here,” says Li. “Instead of boosting features of the inaudible sounds, state-of-the-art hearing aids amplify everything, noise as well as signal, without taking into account whether the listener has difficulty with the inputting sounds or not. The situation gets much worse under noisy conditions.”

Allen and Li suggest a solution is in the works. “The short version of the plan is to change the hearing tests,” said Allen. “Right now for a hearing test, they play tones in your ear and you say, ‘Yes, I can hear it’ or ‘No, I can’t.’ Well that turns out not to be very effective. It doesn’t tell you where your dead regions are. And it doesn’t tell you if you can hear speech transients.”

Allen says the new type of test will use nonsense speech sounds to diagnose which transients and frequency bursts a person cannot hear. His team ran a massive data-collection study, gathering around 1,000 hours of data from almost 100 subjects listening to noise-masked speech from 20 different talkers. Allen was pleased to find the features they were looking for, and the team now understands the basic science behind the speech perception problem. In cases such as that of their subject with a dead region, they may now have the next step in their sights.

“With all of the testing we’ve done, we’ve discovered that this is the best thing to do,” said Allen. “We plan to test specifically with various speech sounds to better diagnose the ear. It needs to be done on an ear-by-ear basis. So this woman in our test, her one ear is very different than her other ear. All of the people where we’ve tested both ears, there’s significant differences, which is kind of amazing.”

Allen is excited that the new diagnostic will make it easier to attack each ear’s particular problems, allowing more direct diagnosis and, he hopes, more concrete solutions to hearing loss.

Allen spent 32 years at Bell Laboratories before joining the University of Illinois in 2002. He is the recipient of the IBM Faculty Award and the IEEE Third Millennium Award. He is a Fellow of IEEE and the Acoustical Society of America. In 1994 he and his wife, Pat Jeng, founded Mimosa Acoustics, a company that diagnoses middle ear problems.

Feipeng Li is a PhD student in ECE. He received his bachelor’s and master’s degrees from Wuhan University, China, in 1996 and 1999, respectively. His interest is in signal processing for hearing aids and cochlear implants. He is the recipient of the 2009 Sundaram Seshu International Student Fellowship from the ECE Department.


