Audio Signal Processing Using AI

3 November 2021 @ 15.00 – 16.30

During this webinar, you will hear from three highly experienced and skilled members of the Danish audio sector, working at three of its fastest-growing companies: Oticon (hearing aids), EPOS (headsets for gaming and enterprise, speakerphones, sound cards and streaming microphones) and Jabra (headsets and speakerphones).

Photo: Umberto – Unsplash

Form: webinar

Date: 3 November 2021

Time: 15.00 – 16.30

Place: Online via Zoom

Price: free

Language: English

If you have questions about signing up, please contact Murielle De Smedt, mds@danishsound.org

Participant profile:

  • Audio DSP engineers
  • Those working with AI or machine learning
  • Product Engineers and Designers
  • Anyone wanting to learn more about AI in audio applications

You will meet:

Program:

Torben Christiansen

A brief introduction to AI in audio, exemplified with the case of a product with embedded AI already on the market. The why, what and how will be addressed.

Torben Christiansen is Director of Technology at EPOS. He is responsible for managing technology development at EPOS, with special attention to leveraging research and technology from other companies in the Demant group.

 

Niels Pontoppidan

Separating voices with AI for super low-power platforms 

 

Hearing technology has made significant progress over the last decade in improving the processing of competing voices, yet closely spaced voices remain a challenge. Single-channel separation concepts powered by AI have reached a maturity that provides benefits for closely spaced voices, and integration into products seems within reach.
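
To make this concrete, the sketch below shows the shape of a single-channel, mask-based separation pipeline: an STFT front end, a small recurrent network that predicts one magnitude mask per speaker, and resynthesis with the mixture phase. Everything here (the TinyMaskNet model, layer sizes, frame lengths) is an illustrative assumption, not Oticon's design, and the network is untrained.

```python
# Minimal sketch of single-channel, mask-based voice separation.
# All model and frame parameters are illustrative assumptions.
import torch

N_FFT, HOP = 256, 128          # short frames keep latency and memory low
N_BINS = N_FFT // 2 + 1        # frequency bins per STFT frame
N_SPEAKERS = 2

class TinyMaskNet(torch.nn.Module):
    """Predicts one magnitude mask per speaker for each STFT frame."""
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.GRU(N_BINS, 64, batch_first=True)
        self.out = torch.nn.Linear(64, N_BINS * N_SPEAKERS)

    def forward(self, mag):                      # mag: (batch, frames, bins)
        h, _ = self.rnn(mag)
        masks = torch.sigmoid(self.out(h))       # mask values in [0, 1]
        return masks.view(*mag.shape[:2], N_SPEAKERS, N_BINS)

def separate(mixture, model):
    """mixture: 1-D waveform tensor -> list of per-speaker waveforms."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(mixture, N_FFT, HOP, window=window, return_complex=True)
    mag = spec.abs().T.unsqueeze(0)              # (1, frames, bins)
    with torch.no_grad():
        masks = model(mag)[0]                    # (frames, speakers, bins)
    outputs = []
    for s in range(N_SPEAKERS):
        masked = spec * masks[:, s, :].T         # mask magnitude, keep mixture phase
        outputs.append(torch.istft(masked, N_FFT, HOP, window=window))
    return outputs

model = TinyMaskNet().eval()
voices = separate(torch.randn(16000), model)     # 1 s of audio at 16 kHz
```

The small GRU and short frames stand in for the kind of compute budget a low-power hearing-aid platform imposes; a production system would be trained, quantized and far more carefully dimensioned.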

Niels Pontoppidan manages research for Oticon at the Eriksholm Research Center. His research combines AI for finding the optimal individual settings for varying intents and sound scenes with AI-based voice enhancement. He received his PhD from DTU Compute in 2005 and first worked on voice separation with machine learning in his master's project in 2001.

 

Clément Laroche

An investigation of data-driven algorithms for speech enhancement

The use of deep learning algorithms in speech enhancement has considerably improved the quality of the processed signals. These methods learn their models from training data. In noisy environments, neural networks can predict speech probability with very high accuracy, thus replacing methods derived from expert knowledge. However, there is still no reliable objective measure for training and evaluating these networks, and these supervised approaches see their performance greatly reduced under realistic conditions that the systems have not seen during training (non-stationary noise, reverberation, diversity of sensors, etc.).
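
As a concrete illustration of the approach, the sketch below applies a network's predicted per-bin speech probability as a soft gain on the noisy spectrogram, together with the common supervised training step that regresses an ideal ratio mask with MSE. The network, shapes and loss choice are illustrative assumptions, not Jabra's pipeline; a low MSE is exactly the kind of objective measure whose reliability the abstract questions.

```python
# Minimal sketch of data-driven speech enhancement via a learned
# speech-presence mask. All sizes and the loss are assumptions.
import torch

N_FFT, HOP = 512, 256
N_BINS = N_FFT // 2 + 1

net = torch.nn.Sequential(              # per-frame speech-probability estimator
    torch.nn.Linear(N_BINS, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, N_BINS),
    torch.nn.Sigmoid(),                 # output in [0, 1]: P(speech) per bin
)

def enhance(noisy):
    """Apply the predicted speech probability as a soft gain per TF bin."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
    prob = net(spec.abs().T)            # (frames, bins) speech probabilities
    return torch.istft(spec * prob.T, N_FFT, HOP, window=window)

def train_step(noisy, clean, optimizer):
    """One supervised step: regress the ideal ratio mask with MSE."""
    window = torch.hann_window(N_FFT)
    noisy_spec = torch.stft(noisy, N_FFT, HOP, window=window, return_complex=True)
    clean_spec = torch.stft(clean, N_FFT, HOP, window=window, return_complex=True)
    # Ideal ratio mask from paired clean/noisy data, clipped to [0, 1].
    ideal_mask = (clean_spec.abs() / noisy_spec.abs().clamp(min=1e-8)).clamp(max=1.0)
    pred = net(noisy_spec.abs().T).T
    loss = torch.nn.functional.mse_loss(pred, ideal_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss = train_step(torch.randn(16000), torch.randn(16000), opt)
```

Note that the loss is computed on mask values, not on perceived quality, which is one reason such systems can degrade under unseen noise, reverberation or sensors.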

Clément Laroche is a senior research scientist in the Audio Research group at Jabra. His goal is to improve voice pickup quality by combining machine learning and traditional DSP algorithms. He graduated from Télécom Paris in 2017 with a PhD in computer science, signal processing and machine learning.

Innovationskraft
When you participate in this event, your time will be used as co-financing for the Innovationskraft project, which is funded by Danmarks Erhvervsfremmebestyrelse and Uddannelses- og Forskningsstyrelsen at the standard rate. Read more about Innovationskraft.

Danish Sound Cluster
