Multisensory Processing

7 December 2021 @ 15.00–16.30

Learn about multisensory research and development going on in the world’s top institutions. If you are interested in, or already working with, sensors, haptic technology, or new solutions for hearing-impaired listeners, this is a webinar you will not want to miss!

Form: webinar

Date: 7 December 2021

Time: 15.00 – 16.30

Place: Online via Zoom

Price: free

Language: English

If you have questions about signing up, please contact Murielle De Smedt.

Participant profile:

  • Researchers working with acoustics, psychoacoustics, or sensors
  • Those working in product development
  • Those working on solutions for the hearing-impaired

The presentations will be technical, but most parts will be accessible to everyone.
You will meet:



Claire Richards

Halfway through her doctoral thesis, Claire will discuss the project’s inspiration, her recent discoveries in extra-tympanic hearing and multimodal integration, and the “research by design” context of her work. Her research is supported by two laboratories (IRCAM STMS Lab and the Design Research Center), along with her industrial partner Actronika.

She will present an overview of three audio-haptic devices, designed during her PhD for both experimental and product-based use contexts.

Claire is pursuing her PhD at the crossroads between design and perceptive sciences at IRCAM in Paris, France. Equipped with her own hearing handicap and training in ergonomics, she applies a human-scaled perspective to her research, focused primarily on the audio-haptic experience of sound. Through the creation of multimodal wearable devices, she hopes to expand the horizons of what it can mean to hear.

Jeremy Marozeau

We perceive the world through all our senses. For example, the experience of drinking a good wine involves our senses of taste, smell, vision, and touch. Similarly, we do not only hear music; we also feel it through vibrations. We can easily forget this because our sense of hearing is better adapted to transmitting acoustic waves than our sense of touch. However, someone with hearing loss will rely more heavily on touch to compensate for the deficit.

The main treatments to restore sound perception are to amplify the sound (the hearing aid) or to stimulate the auditory nerve directly (the cochlear implant). However, combining audio and tactile stimulation would allow us to use the body’s full potential and maximize the amount of sound we can restore.

In this talk, I will discuss the studies from our lab dedicated to restoring music perception in hearing-impaired listeners through the tactile modality.

Jeremy Marozeau is the leader of the “Music and Cochlear Lab” within the Department of Health Technology of DTU. He received a BEng degree in Microtechnology Engineering in 1999 from the Swiss Federal Institute of Technology in Lausanne (EPFL), an MSc degree in Acoustics and Signal Processing Applied to Music in 2000, and a PhD in 2004 from the Institute for Research and Coordination in Acoustics/Music (IRCAM) and the University of Paris VI. After working at the French National Center for Scientific Research (CNRS, Marseille) on modeling the loudness of impulsive sounds, he continued his research on loudness as a Research Associate at Northeastern University, Boston, in 2005. Three years later, he joined the Bionics Institute in Melbourne as a senior researcher to improve music perception for cochlear implant users. In 2014, he joined the Hearing Systems Group at DTU as an Associate Professor. Jeremy’s research focuses on music perception in cochlear implant recipients.


Cumhur Erkut

In the Multisensory Experience Lab, we investigate the combination of different input and output modalities in interactive applications. We are interested both in the development of novel hardware and software technologies and in the evaluation of user experience. We apply our technologies in a variety of areas such as health, rehabilitation, education, art, and entertainment. We are particularly interested in topics related to sonic interaction design for multimodal environments, simulated walking experiences, sound rendering and spatialization, haptic interfaces, cinematic VR, and the evaluation of user experience in multimodal environments.

MEL traditionally relies on physics-based audio and haptic models for multisensory processing. Currently, we focus on physics-based deep learning and differentiable programming for constructing models or estimating model parameters. We advocate that this interpretable and explainable approach to machine learning will, in time, solve fundamental problems of both deep learning and multisensory processing: it emphasizes reasoning instead of mere mapping, and it optimizes well-understood classical signal-processing algorithms through inductive, structural biases instead of a generic black-box approach.
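The contrast drawn above, between fitting the parameters of a well-understood signal model and training a generic black box, can be illustrated with a toy example. The following sketch (all names and numbers hypothetical, with a finite difference standing in for true automatic differentiation) recovers the decay rate of a damped sinusoid, a one-mode physics-based model of a plucked string, by gradient descent on a differentiable loss:

```python
import math

def damped_sine(freq, decay, n=200, sr=1000.0):
    """Toy physics-based model: one exponentially decaying mode of a vibrating string."""
    return [math.exp(-decay * t / sr) * math.sin(2 * math.pi * freq * t / sr)
            for t in range(n)]

def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# A "measured" signal whose decay rate we pretend not to know.
target = damped_sine(freq=50.0, decay=12.0)

# The loss is a smooth function of the physical parameter, so we can follow
# its gradient (approximated here by a central finite difference) instead of
# training a black-box network to map signals to parameters.
decay, lr, eps = 5.0, 1000.0, 1e-4
for _ in range(200):
    grad = (mse(damped_sine(50.0, decay + eps), target)
            - mse(damped_sine(50.0, decay - eps), target)) / (2 * eps)
    decay -= lr * grad

print(decay)  # should have converged close to the true decay rate, 12.0
```

In a differentiable-programming framework the gradient would come from automatic differentiation rather than finite differences, but the structural bias is the same: the model’s physics is fixed, and only its interpretable parameters are learned.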


Cumhur Erkut (MSc 1997, PhD 2002) received a PhD in acoustics and audio signal processing from Helsinki University of Technology, Finland, with minor studies in Information Systems (Machine Learning). During his post-doctoral period, he contributed to national and international projects (EU FP5-6). Between 2007 and 2012, he conducted independent research as an Academy Research Fellow on sonic interaction design (Schema-SID, #120583, Academy of Finland), together with the FP7 IC601 SID project on the same theme. Since 2013, Dr. Erkut has consolidated his expertise in sonic and embodied interaction design at the Multisensory Experience Lab of Aalborg University Copenhagen, as an Associate Professor, by combining the theory and methods of human-computer interaction, audio signal processing, and physics-based deep learning.

When you participate in this event, your time will be counted as co-financing, at the standard rate, for the project Innovationskraft, which is funded by Danmarks Erhvervsfremmebestyrelse and Uddannelses- og Forskningsstyrelsen. Read more about Innovationskraft.

Danish Sound Cluster
