Speaker Separation

Sound environments can be highly complex, making it difficult to analyse them and separate a target signal from other signals and noise. In particular, it is extremely challenging to computationally separate one human speaker from another, because speech signals are very similar to each other and highly variable over time, among other factors. This webinar dives into some of the latest research and technologies used in the area of speaker separation.

Format: webinar

Date: 23 November 2022

Time: 15.00 – 16.30

Place: Online via Zoom

Price: free

Language: English

If you have questions about signing up, please contact Murielle De Smedt, mds@danishsound.org

Participant profile:

  • Researchers
  • Acoustics engineers
  • Acousticians
  • Audiologists
  • Technical audiologists
  • Machine learning/AI engineers
  • DSP engineers
  • Audio/sound engineers
  • Hearing aid users
  • Audio enthusiasts

You will meet:

Program

We have all been in situations where many people are speaking at the same time and we can hardly distinguish or understand one voice among all the others. Fortunately for us humans, our brain does most of the work, and we can tune in to a particular person's voice with relative ease, even in the most challenging sound situations. But how can we build technology that does the same as we do? What are the best methods and algorithms for such a job, ones that could help, for example, hearing aid users? In this webinar, three marvelous guests who have been researching this area will help us dive deeper into the research on speaker separation.
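To give a flavour of what such an algorithm does, here is a minimal sketch of time-frequency masking, a family of techniques many modern separation systems build on. This is our own illustration, not material from the webinar: it uses two synthetic tones as stand-in "speakers" and computes an oracle mask from the known sources, whereas a real system would have to estimate the mask from the mixture alone, typically with a neural network.

```python
# Oracle time-frequency masking: the core idea behind many speaker
# separation systems (illustrative sketch, not a production method).
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs * 2) / fs
# Two synthetic "speakers" (amplitude-modulated tones standing in for speech)
s1 = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
s2 = np.sin(2 * np.pi * 470 * t) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
mix = s1 + s2

# Move everything into the time-frequency domain
_, _, S1 = stft(s1, fs=fs, nperseg=512)
_, _, S2 = stft(s2, fs=fs, nperseg=512)
_, _, M = stft(mix, fs=fs, nperseg=512)

# Ideal ratio mask: the fraction of energy in each bin belonging to speaker 1.
# A deployed system would predict this mask from M alone with a trained model.
mask1 = np.abs(S1) / (np.abs(S1) + np.abs(S2) + 1e-8)

# Apply the mask to the mixture and reconstruct the estimated speaker
_, s1_hat = istft(mask1 * M, fs=fs, nperseg=512)
s1_hat = s1_hat[: len(s1)]
snr = 10 * np.log10(np.sum(s1**2) / np.sum((s1 - s1_hat) ** 2))
print(f"SNR of separated speaker 1: {snr:.1f} dB")
```

The hard part, and the subject of much of the research discussed in this webinar, is estimating that mask (or the separated waveform directly) when only the mixture is available.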

Lars Bramsløw, Senior Scientist at Eriksholm Research Centre 

Lars is a research engineer and project manager in the Augmented Hearing research group at Eriksholm Research Centre (part of the Demant group). Lars is known in the field for his ambitious, thorough approach and has worked on several advanced concepts and research topics over the years. He leverages his experience to manage collaborators and the various projects carried out at Eriksholm. His dream and greatest motivation is to make the world's best hearing aid.

Lars started at Eriksholm in 1991, where he did his PhD on predicting and modelling sound quality in hearing aids. After that, he left the company for a few years to pursue other ventures, including work at Danish radio. In 2000, Lars returned to the Oticon headquarters and worked on audiology development for hearing aids until 2012, when he went back to Eriksholm to do what he does today: more in-depth, independent research and concept work in the hearing field.


Mads Græsbøll Christensen, Full Professor at Audio Analysis Lab, CREATE, Aalborg University 

Mads holds an M.Sc. (2002), a Ph.D. (2005), and a dr.techn. (2022) from Aalborg University in Denmark. He is a Full Professor in Audio Processing at CREATE and the head and founder of the Audio Analysis Lab. He has held visiting positions at Philips Research Labs, ENST, UCSB, and Columbia University. He has published four books and more than 250 papers in peer-reviewed conference proceedings and journals and has given tutorial and keynote talks at international conferences. His research interests lie in audio and acoustic signal processing, where he has worked on topics such as microphone arrays, noise reduction, signal modeling, speech analysis, and spatial audio. Dr. Christensen's work has been recognized both nationally and internationally. He was a Villum Young Investigator and has received several awards for his work, including IEEE best paper awards, the Spar Nord Foundation's Research Prize, a Danish Independent Research Council Young Researcher's Award, the Statoil Prize, and the EURASIP Early Career Award.


Rasmus Høegh, PhD Student at WSAudiology and DTU 

Rasmus is a PhD student at WSAudiology and the Technical University of Denmark (DTU). He is interested in probabilistic approaches to deep learning, especially deep generative models of sequential data and machine learning applied to health technology. Currently, he is primarily researching how to make audio models that generalize to real-world use scenarios. Specifically, Rasmus has been working on variational autoencoders and how probabilistic modelling enables us to, e.g., learn robust latent representations, incorporate prior knowledge, and utilize uncertainty quantification.
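For readers unfamiliar with the technique mentioned above: a variational autoencoder (VAE) encodes data into a distribution over a latent space rather than a single point, which is what makes uncertainty quantification and prior knowledge natural to express. The sketch below is a generic, minimal VAE in PyTorch; it is our illustration of the general idea, not Rasmus's actual models, and the feature dimensions are arbitrary.

```python
# Minimal variational autoencoder: the encoder outputs a mean and log-variance,
# the reparameterization trick gives a differentiable sample, and the loss
# is a reconstruction term plus a KL divergence to a standard-normal prior.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 2 * z_dim))  # mean and log-var
        self.dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(),
                                 nn.Linear(32, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        x_hat = self.dec(z)
        recon = ((x - x_hat) ** 2).sum(dim=-1)                      # reconstruction
        kl = 0.5 * (mu**2 + logvar.exp() - logvar - 1).sum(dim=-1)  # KL to N(0, I)
        return (recon + kl).mean()  # negative evidence lower bound (ELBO)

vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
x = torch.randn(128, 64)  # stand-in for a batch of audio feature frames
for _ in range(200):
    opt.zero_grad()
    loss = vae(x)
    loss.backward()
    opt.step()
print(f"final negative ELBO: {loss.item():.2f}")
```

The posterior variance the encoder learns is one concrete handle on uncertainty, which can help when deciding how much to trust a model's output on real-world audio it has not seen before.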


Innovationskraft
When you participate in this event, your time will be used as co-financing for the Innovationskraft project, which is funded by the Danish Business Promotion Board and the Danish Ministry of Higher Education and Science at a standard rate. Read more about Innovationskraft HERE.

By signing up for this event, you will automatically receive the Danish Sound Cluster newsletter.
