Algorithms have come a long way from when you learned them in algebra. (Remember FOIL? First, Outside, Inside, Last? Don’t sweat it; we had to look it up, too.) But it’s true: one of those classroom tasks that probably had you thinking, “When will I ever use this in real life?” has turned out to be one of the big buzzwords of the 21st century.
At heart, algorithms are simple. Each one is just a set of directions to be followed in a sequence (FOIL is a perfect example, since it literally just tells you the order to work in when multiplying two binomials). With their massive processing power, computers can contend with algorithms that are immensely complex; even when a computer is just following directions, it is following them on a scale our conscious minds can’t begin to comprehend. Those kinds of algorithms are what Silicon Valley types love to hype, and they are how services like Netflix or Spotify predict what you will (or won’t) enjoy watching or hearing.
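To make the “set of directions” point concrete, here is FOIL written out as a few lines of Python. (The function name and the way the answer is returned are our own choices, purely for illustration.)

```python
def foil(a, b, c, d):
    """Expand (ax + b)(cx + d) with FOIL: First, Outside, Inside, Last.

    Returns the coefficients of the result:
    (a*c) x^2 + (a*d + b*c) x + (b*d)
    """
    first = a * c    # First terms:   ax * cx
    outside = a * d  # Outside terms: ax * d
    inside = b * c   # Inside terms:  b  * cx
    last = b * d     # Last terms:    b  * d
    return (first, outside + inside, last)

# (x + 2)(x + 3) expands to x^2 + 5x + 6
print(foil(1, 2, 1, 3))  # (1, 5, 6)
```

Four steps, always in the same order, no judgment calls along the way; that is all an algorithm is, whether it runs in a ninth grader’s head or a data center.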
And speaking of hearing (which is why we’re here, after all): a Ph.D. student at the Centre for Acoustic Signal Processing Research (CASPR) at Denmark’s Aalborg University recently developed an algorithm that has the potential to dramatically improve hearing aids’ performance. For his doctoral thesis, Morten Kolbæk worked on alleviating problems in two scenarios that are all too familiar to many hearing aid users. The first was improving speech recognition in a one-on-one conversation with background noise. The second was similar but took it up a notch: separating conversational speech from background noise that includes other people talking.
If you’ve ever struggled to hear someone you’re chatting with at a restaurant, a party, or even in a car (forget the radio; traffic noise alone can be enough), you understand the problems Kolbæk’s research endeavored to crack. The human brain is amazing at separating voices from one another, better even than a computer. (That’s why, for example, transcription software that can easily and accurately turn a monologue into type makes an absolute mess of a one-on-one interview.) But when you’re starting out with hearing aids, you can find yourself feeling less like a person and more like a computer: even if sounds are more audible, it may be difficult to tell them apart.
Kolbæk was able to develop algorithms that enable computers not only to amplify sound selectively but to do it (so to speak) on the fly. Generally, computers, including some smart hearing aids, which are programmable, can distinguish voices only when they are in a “known” sound environment or scenario. What Kolbæk sought was a more advanced, adaptive algorithm, so that even in a totally unfamiliar sonic environment, a computer could automatically adjust and focus on one voice and not others.
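One common way to frame this kind of selective amplification is as masking: estimate, moment by moment, how much of the sound belongs to the voice you want, and turn the volume up or down accordingly. The toy Python sketch below is our own simplification of that idea, not Kolbæk’s actual algorithm (which uses machine learning to do the hard part, estimating those quantities from the raw audio); here we simply assume the per-frame voice and noise energies are already known.

```python
def mask_gains(voice_energy, noise_energy, attenuation=0.1):
    """Toy binary mask: keep frames where the target voice dominates,
    attenuate frames dominated by noise.

    voice_energy, noise_energy: per-frame energy estimates (same length).
    Returns one gain per frame. Real separation systems work on much
    finer time-frequency bins, with the energies estimated by a trained
    model rather than given in advance.
    """
    return [1.0 if v > n else attenuation
            for v, n in zip(voice_energy, noise_energy)]

# Three frames: voice dominates, then noise dominates, then voice again.
print(mask_gains([0.9, 0.2, 0.8], [0.1, 0.7, 0.3]))  # [1.0, 0.1, 1.0]
```

The hard, computationally expensive part is everything this sketch takes for granted: deciding, in a sound environment the system has never encountered, which energy belongs to the voice and which to the noise.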
The only bad news? For now, this particular algorithm requires so much computing power that it can’t run inside a hearing aid; the device would be too big. But along with colleagues at CASPR, Kolbæk is working on it. (They’re also working on better establishing how, in a noisy situation, a computer could “know” which voice to amplify.)
And there’s more good news: It’s not the only algorithm out there. Some of the newest smart hearing aids are taking advantage of massive, anonymized data collection and similar kinds of machine learning to help improve hearing aids’ performance. Want to learn more about the different types of hearing aids that are available today? Connect with a hearing specialist in your area.