Hearing aid speech enhancement processing with ML and AI

Date: 5 September 2023

Time: 11.50 – 12.55 

Place: Aalborg University in Copenhagen, A.C. Meyers Vænge 15, 2450 Copenhagen

Price: free entrance on 5 September for Danish Sound Cluster members – send an e-mail to Stefania Serafin (sts@create.aau.dk)

Language: English

This annual conference brings together researchers and practitioners from across the globe working with digital audio processing for music and speech, sound art, acoustics, and related applications.

Join us at DAFX 2023 for two interesting talks: 

Niels Pontoppidan, Principal Scientist at Eriksholm Research Center

Niels Pontoppidan manages research for Oticon at Eriksholm Research Center. His research combines AI for finding the optimal individual settings for varying intents and sound scenes with AI-based voice enhancement. He received his PhD from DTU Compute in 2005 and began working on voice separation with machine learning during his master's project in 2001.


Advances, barriers, and future direction for hearing aid effects

During the last 20 years, advanced applications for hearing instruments that are only possible with machine learning (ML) have emerged. For many of these, however, the computational complexity long ruled out actual implementation. Nevertheless, in 2020 core signal processing based on ML principles came into use for enhancing speech in the presence of noise. It is interesting to look back at the interplay of applications, algorithms, connectivity, and hardware, and to speculate about the next core signal processing areas of hearing instruments that ML will enhance.

Niels Pontoppidan, Eriksholm Research Center

Clément Laroche, Senior Research Scientist at Jabra

Clément Laroche is a senior research scientist in the Audio Research group at Jabra. His goal is to improve voice pickup quality by combining machine learning and traditional DSP algorithms. He graduated from Télécom Paris in 2017 with a PhD in computer science, signal processing, and machine learning.

With the rapid evolution of artificial intelligence and deep learning algorithms, it is essential to discern their transformative impact on the audio performance of telecommunication devices, specifically headsets, speakerphones, and video bars. This presentation will start by delineating the challenges faced by conventional signal processing techniques in the current digital age, such as the inability to effectively filter ambient noise in varying environments or to adapt to different speech characteristics.

We then explore how deep learning-based methods can help overcome these challenges by learning complex, non-linear relationships from vast amounts of audio data. In addition to the technical aspects, we believe in the invaluable role of human listeners in validating our models. Hence, we will share results from a study in which a diverse crowd of listeners rated audio quality. The insights gathered from these human ratings provided a more nuanced understanding of perceived audio quality, underlining the importance of a human-centric approach in our technical advancements.

Clément Laroche, GN Audio
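To make the idea of learning a non-linear mapping from noisy to clean speech concrete, here is a minimal sketch of one common approach, spectral masking, written in PyTorch. It is purely illustrative and not taken from either talk: all names, layer sizes, and parameters are assumptions. A small recurrent network predicts a per-frequency gain mask from the noisy magnitude spectrogram, and the mask is applied to the complex spectrogram before resynthesis.

# Illustrative sketch only: a mask-based denoiser of the kind commonly used
# in deep-learning speech enhancement. Sizes and names are assumptions.
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128
N_BINS = N_FFT // 2 + 1  # 257 frequency bins

class MaskNet(nn.Module):
    """Predicts a per-bin gain mask in [0, 1] from the noisy magnitude spectrum."""
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(N_BINS, hidden, batch_first=True)
        self.out = nn.Sequential(nn.Linear(hidden, N_BINS), nn.Sigmoid())

    def forward(self, mag):          # mag: (batch, frames, bins)
        h, _ = self.rnn(mag)
        return self.out(h)           # gain mask, same shape as mag

def enhance(noisy, model):
    """noisy: (batch, samples) waveform -> denoised waveform estimate."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
    mag = spec.abs().transpose(1, 2)      # (batch, frames, bins) for the GRU
    mask = model(mag).transpose(1, 2)     # back to (batch, bins, frames)
    return torch.istft(spec * mask, N_FFT, HOP, window=window)

model = MaskNet()
clean_est = enhance(torch.randn(1, 16000), model)  # 1 s of audio at 16 kHz

In a real product the model would be trained on paired noisy/clean recordings and would have to run under tight latency and power budgets; the sketch only shows the core mask-estimation idea.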

These talks are part of the 4-day international conference on digital audio effects, “DAFX23 Copenhagen”, 4th – 7th September.

Innovationskraft
When you participate in this event, your time will be used as co-financing for the project Innovationskraft, which is funded by the Danish Business Promotion Board and the Danish Ministry of Higher Education and Science at a standard rate. Read more about Innovationskraft.
