ECE senior helps develop real-time spatial audio technology

5/26/2010 Megan Kelly, Coordinated Science Lab

ECE senior Ryan Rogowski and ECE Professor Douglas L. Jones are taking video chat to the next level, developing 3D audio systems that can process real-time spatial audio.


The advent of Skype and other video chat applications has enabled people to see friends and family in real time while communicating virtually.

Ryan Rogowski

ECE senior Ryan Rogowski and ECE Professor Douglas L. Jones are taking video chat to the next level, developing 3D audio systems that can process real-time spatial audio.

In other words, if you’re video chatting with a friend in New York while moving around your Champaign room, your friend will be able to hear your voice travel as though he were in the room with you.

These systems have the potential not only to enhance video chat applications but also to transform hearing-aid capabilities.

“It’s really cool technology,” Rogowski said. “I kept thinking, ‘How will this be possible?’ while working on it, but we did it.”

Last fall, Rogowski heard that Jones, who is a researcher in the Coordinated Science Lab, was working on 3D audio systems with real-time spatial audio. Interested, he asked Jones to be his senior thesis adviser, and the collaboration got under way. ECE graduate student Nam Nguyen and ECE technician Mark Smart assisted.

Rogowski and Jones recorded 3D audio using an array of four small microphones, an idea that differs from previous 3D recordings.

Douglas L. Jones

“In the past, recording 3D sounds required many expensive microphones, which were implemented on a variety of different systems, whether it be a video camera or hearing aid,” Rogowski said. “We took the 3D sound, recorded it, and reproduced it in a stereo headset.”
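A microphone array captures spatial information because the same sound reaches each microphone at a slightly different time. The article doesn't describe the team's actual algorithm, so the following is only an illustrative sketch: estimating the time difference of arrival between two microphones by finding the peak of their cross-correlation. All signals and numbers here are made up for the example.

```python
import numpy as np

def tdoa_samples(x_ref, x_other):
    """Estimate the time-difference-of-arrival (in samples) between two
    microphone signals from the peak of their cross-correlation.
    A positive result means x_other received the sound later."""
    corr = np.correlate(x_other, x_ref, mode="full")
    return int(np.argmax(corr)) - (len(x_ref) - 1)

# Simulate one source that arrives 5 samples later at the second microphone.
rng = np.random.default_rng(0)
src = rng.standard_normal(1024)
mic1 = src
mic2 = np.concatenate([np.zeros(5), src[:-5]])  # delayed copy of the source

print(tdoa_samples(mic1, mic2))  # → 5
```

With more than two microphones, a set of such pairwise delays constrains the direction the sound came from, which is one reason a small four-microphone array can stand in for the many-microphone rigs mentioned above.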

Rogowski added that they created surround sound in the headset by approximating “head-related transfer functions.” These functions describe how the head, inner ear, and pinna (the outer ear flap) shape incoming sound: they capture how a sound wave is filtered before it reaches the eardrum and inner ear, and that filtering changes with the direction the sound arrives from.

“It turns out by using head-related transfer functions, you can create the change in the sounds that your brain associates with directions,” Rogowski said. “We can manipulate sound so it seems like you’re hearing it in surround sound and not just from [the headset surrounding] your ears.”
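A common way to apply head-related transfer functions in software, and a plausible reading of the approach described here, is to convolve a mono signal with a pair of head-related impulse responses (HRIRs), one per ear. The sketch below uses toy, hand-made impulse responses rather than measured HRTF data, purely to show the mechanics:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Render a mono signal for two ears by convolving it with a pair of
    head-related impulse responses (the time-domain form of HRTFs)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(len(left), len(right))           # pad to a common length
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right])           # shape: (2, n) stereo signal

# Toy HRIRs: the right ear hears the source 3 samples later and quieter,
# a crude stand-in for a measured HRTF pair for a source off to the left.
hrir_l = np.array([1.0])
hrir_r = np.array([0.0, 0.0, 0.0, 0.6])

mono = np.sin(2 * np.pi * 440 * np.arange(512) / 8000)
stereo = render_binaural(mono, hrir_l, hrir_r)
```

Played over headphones, the interaural delay and level difference encoded in the two impulse responses are exactly the directional cues the brain uses, which is what lets plain stereo headphones mimic surround sound.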

In addition, Rogowski and Jones made the system work in real time: instead of recording the audio and playing it back later, they recorded and listened to it simultaneously.

“At the same time we’re recording in one room, we’d hook [the system] up in a different room and hear the sound move around there,” Rogowski said. “It seemed as if the people we were recording were in the same room as us.”
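Real-time operation typically means processing the audio in small blocks as they arrive, rather than filtering a whole recording after the fact. The snippet below is a minimal overlap-add sketch of block-wise filtering (not the team's actual implementation): each block is filtered and can be played immediately, and the concatenated stream matches what offline filtering would have produced.

```python
import numpy as np

def stream_filter(blocks, h):
    """Filter an audio stream block by block using overlap-add, so each
    output chunk is ready as soon as its input chunk has been captured.
    `blocks` is any iterable of fixed-size sample chunks; `h` is the
    filter impulse response (e.g., a spatializing filter)."""
    tail = np.zeros(len(h) - 1)              # convolution tail carried over
    for block in blocks:
        y = np.convolve(block, h)            # filter this chunk
        y[:len(tail)] += tail                # add tail from previous chunk
        tail = y[len(block):].copy()         # save new tail for next chunk
        yield y[:len(block)]                 # playable immediately

# The streamed output matches one big offline convolution.
h = np.array([0.5, 0.3, 0.2])
x = np.arange(32, dtype=float)
stream = np.concatenate(list(stream_filter(x.reshape(4, 8), h)))
offline = np.convolve(x, h)[:len(x)]
print(np.allclose(stream, offline))  # → True
```

The latency of such a scheme is roughly one block, which is why real-time spatial audio systems keep their processing blocks short.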

Rogowski said the systems could be used in video chat and hearing aid applications, among others.

“Currently, if you use a hearing aid, you may only hear background noise without spatial direction,” Rogowski explained. “For example, if someone was walking by, a person with a hearing aid might not be able to tell which direction they were coming from unless they were watching the person. This technology could change all that.”

Rogowski said the next step is to find interested businesses and expand on what they’ve already accomplished. They demonstrated the system to several visiting companies during the spring semester.

“There’s a little more work to be done in improving sound localization, like how well you can differentiate where the sound comes from,” Rogowski said.

Now that graduation ceremonies are over, Rogowski will spend a year studying Mandarin in China on a Boren Scholarship. He plans to attend graduate school and build upon his research with Jones. 



This story was published May 26, 2010.