Researchers studying wearable listening technology now have a new data set to use, thanks to CSL graduate student Ryan Corey and his team. Debuting at the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) this week, the first-of-its-kind wearable microphone impulse response data set is invaluable to audio research for two reasons: first, it includes up to 80 microphones instead of the usual two, showing how sound is heard at different parts of the body; and second, the data is available for free under an open-access license.
“We believe hearing aids, smart headphones and all listening devices would work better if they had a lot of microphones, but most products only have two,” said Corey. “There isn’t data out there for more than that. Even the work that has been done with more didn’t include open-access data sets.”
The data set consists of more than 8,000 acoustic impulse responses measured at 80 different positions on the body. The 80 microphones were tested with five different hat/headphone styles and six different types of clothing, and the sound in the recordings came from 24 different directions to simulate noisy crowds.
The group, including Corey’s adviser, CSL Professor Andrew Singer, and former undergraduate student Naoki Tsuda, spent weeks placing 80 microphones all over a mannequin and Corey himself in the CSL Augmented Listening Laboratory. They then recorded acoustic impulse responses to study the acoustics of the body and whether clothing makes a difference in how microphones pick up sound. The team used the collected data in the paper being presented at ICASSP this week, but they wanted the data to go further.
“We’ve been frustrated when trying to use data sets that aren’t open,” said Corey. “Wearable arrays are important and more people should research them. Having this data out there will make it more convenient to do so.”
Future researchers can use the data to simulate wearable microphone arrays with different numbers of microphones at different points on the body. Many humans are already wearing multiple devices with microphones, and this data could help take advantage of that. Engineers can use it to design new products and study performance tradeoffs for different applications. A few of the potential applications for the data include augmented reality, speech recognition, and acoustic event detection, among others. Without the data set created by the CSL team, each researcher would have to build their own prototypes and test them, which is time-consuming and expensive.
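In broad terms, simulating a wearable array from such a data set amounts to convolving a dry source signal with the measured impulse response for each chosen microphone position. The sketch below illustrates the idea with placeholder impulse responses; the array size, sample rate, and signal names are illustrative assumptions, not details of the team's data set, which would supply the real impulse responses for each body position and clothing condition.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 16000      # sample rate in Hz (assumed for illustration)
num_mics = 8    # simulate a subset of the 80 measured positions
ir_len = 2048   # impulse response length in samples (assumed)

rng = np.random.default_rng(0)

# Placeholder impulse responses with an exponential decay envelope.
# In practice, these would be loaded from the open-access data set
# for the chosen microphone positions and clothing condition.
irs = rng.standard_normal((num_mics, ir_len)) * np.exp(-np.arange(ir_len) / 300)

# A one-second synthetic source signal standing in for speech.
source = rng.standard_normal(fs)

# Simulate what each wearable microphone would record: the source
# convolved with that microphone's acoustic impulse response.
mic_signals = np.stack([fftconvolve(source, ir) for ir in irs])

print(mic_signals.shape)  # one simulated recording per microphone
```

Because the convolution replaces a physical recording session, a researcher can rerun it with any subset of positions to study the performance tradeoffs mentioned above without building hardware.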
The presentation takes place on Tuesday, May 14, in Brighton, UK. Singer, Fox Family Professor in Electrical and Computer Engineering, and Corey hope the presentation will raise awareness of the data set, encourage others to use it, and give them the opportunity to receive feedback.
“This is the best-attended conference for audio signal processing, so I’ll be able to introduce the data set to a lot of researchers who could potentially take advantage of it, build on it, and give us feedback for future improvement,” said Corey.