Future Sound Forum

4th May 2022, 12:00-18:00

Aalborg University in Copenhagen

A.C. Meyers Vænge 15, 2450 København 

Join this event together with professionals and enthusiasts working on hearing aids, speech enhancement, headsets, wireless devices, data privacy, VR/XR and related fields. We are setting the scene for tech people, entrepreneurs, researchers, directors and students to come together, learn and discuss new trends in audio tech.

The program is loaded with exciting talks, panel debates and discussions with experts from the research, business and tech worlds. You will also experience a selection of demos and game experiences from some of our participants and from the Multisensory Experience Lab at Aalborg University.  

There will also be plenty of time for networking. We promise you that you won’t leave empty-handed from this event! 

Ticket

DKK 600 (inc. 25% VAT)
Still available
Member ticket

DKK 300 (inc. 25% VAT)
Still available
Students

FREE
Still available

Member? Check our member list here

Note: Your order confirmation is your entry ticket! 

Tickets are not refundable

Programme

Scroll down for more information on the presentations, speakers and demonstrations.

12:00

Registration & Lunch

Pick up your nametag, grab a sandwich and catch up with your fellow audio friends.

12:50

Welcome

Welcome from Stefania Serafin, professor and head of the Multisensory Experience Lab at Aalborg University in Copenhagen. Our director Torben Vilsgaard will update you on our work at the Danish Sound Cluster and our upcoming plans.

13:00

Machine learning for audio-visual speech enhancement or reconstruction
Daniel Michelsanti (Oticon, CASPR, AAU)

13:30

Cochlear Implants & Privacy including synthesized data from models of real users
Niels Pontoppidan (Eriksholm)

14:00

Coffee & Cake

14:25

Presentation of the Danish Sound Cluster’s “Sound Quality in Digital Meetings” Paper – see below for more info

15:00

Presentation & Panel Discussion: Privacy & Audio
Jonas Lindstrøm (Alexandra Institute), Sune Hannibal Holm (Sektion for Forbrug, Bioetik og Regulering, Copenhagen University), Torben Christiansen (EPOS), Nick Dunkerley (Hindenburg)

15:30

Time for Demos

16:00

Panel: The Sound of the Metaverse

16:30

The bar is open - Demos and Games

Refreshments, games and demos, including contributions from students at the Multisensory Experience Lab here at Aalborg University. See below for more information about the fantastic demos!

18:00

Thank you for today!

Talks & Panels

Machine learning for audio-visual speech enhancement or reconstruction     (13:00)

Deep learning techniques have been successfully applied to a wide range of problems. This presentation will consider the use of deep learning for audio-visual speech enhancement applications, with a specific focus on hearing aid systems. After a general introduction to deep learning for supervised learning problems, we will see how deep learning can be used to fuse multi-modal data to reduce the background noise of a speech signal, i.e., the task of audio-visual speech enhancement. The talk will include some audio demos to give the audience a glimpse of the capabilities of recent advances in the field.
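To make the idea of multi-modal fusion concrete, here is a minimal sketch (in PyTorch, with purely illustrative layer sizes and feature shapes; this is not the presenter's model): noisy spectrogram frames and lip-region embeddings are projected, concatenated and passed through a recurrent layer that predicts a time-frequency mask for suppressing background noise.

```python
# Minimal sketch of audio-visual fusion for speech enhancement.
# All layer sizes, feature shapes and names are illustrative assumptions.
import torch
import torch.nn as nn

class AudioVisualEnhancer(nn.Module):
    def __init__(self, n_freq=257, n_visual=128, hidden=256):
        super().__init__()
        self.audio_proj = nn.Linear(n_freq, hidden)     # noisy spectrogram frames
        self.visual_proj = nn.Linear(n_visual, hidden)  # lip-region embeddings
        self.fusion = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.mask_head = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_spec, visual_feat):
        # noisy_spec: (batch, time, n_freq), visual_feat: (batch, time, n_visual)
        a = self.audio_proj(noisy_spec)
        v = self.visual_proj(visual_feat)
        fused, _ = self.fusion(torch.cat([a, v], dim=-1))  # simple concatenation fusion
        mask = self.mask_head(fused)        # values in [0, 1] per time-frequency bin
        return mask * noisy_spec            # masked (enhanced) magnitude spectrogram

# Example with random tensors standing in for real features:
model = AudioVisualEnhancer()
enhanced = model(torch.rand(1, 100, 257), torch.rand(1, 100, 128))
```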

Cochlear Implants & Privacy including synthesized data from models of real users       (13:30)

A significant part of recent audio research and audio product development involves sound from people's everyday lives, and this causes justified concerns about privacy. Cochlear implant research provides inspiration for designing audio sampling schemes that maintain privacy: the question is turned upside down, so that the number of frequency bands is smaller, and the sampling frequency slower, than what is required to understand speech and identify persons. Another option for preserving privacy is data synthesizers. A data synthesizer is a statistical model that infers its parameters from raw data and then synthesizes anonymous data by drawing random samples from the model. The talk will feature examples of using subsampling and data synthesis to ensure privacy.
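As a rough illustration of the data-synthesizer idea (a toy sketch with invented numbers, not Eriksholm's actual models or data), one can fit a simple statistical model to the raw measurements and release only samples drawn from it:

```python
# Toy data synthesizer: fit a statistical model to raw usage data and
# release only random samples drawn from the model, never the raw data.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "raw" data: per-user daily listening hours and mean sound level (dB SPL).
raw = np.column_stack([rng.normal(8, 2, 500), rng.normal(65, 6, 500)])

# 1. Infer model parameters from the raw data (here: a multivariate Gaussian).
mean = raw.mean(axis=0)
cov = np.cov(raw, rowvar=False)

# 2. Synthesize anonymous data by drawing fresh samples from the fitted model.
synthetic = rng.multivariate_normal(mean, cov, size=500)

print("raw mean:      ", np.round(mean, 2))
print("synthetic mean:", np.round(synthetic.mean(axis=0), 2))
```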

Sound Quality in Digital Meetings Paper     (14:25)

Our “Future Sound Solutions” working group has prepared a Sound Quality in Digital Meetings paper, which will be presented for the first time at this event. 

The paper will be presented by the chairman of the board, Birger Schneider, together with Clement Laroche (Jabra), Tore Stegenborg-Andersen (SenseLab, FORCE Technology) and Torben Christiansen (EPOS).

Birger Schneider, Chairman of the Board, Danish Sound Cluster

Tore Stegenborg-Andersen, SenseLab, FORCE Technology

Torben Christiansen, Director of Technology, EPOS

Panel: Privacy & Audio (15:00)

Privacy considerations prevent a lot of data from being analysed and used, but new technologies are being developed which allow sensitive data to be analysed without compromising privacy. Based on concrete cases and needs from the industry, Jonas Lindstrøm from the Alexandra Institute will give an introduction to these technologies and their potential.
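As one purely illustrative example of such a technology (the choice of technique here is an assumption; the talk may focus on others), secure multi-party computation lets parties compute on secret-shared data so that no single party ever sees another's raw values. A minimal additive-secret-sharing sketch:

```python
# Illustrative sketch of additive secret sharing, one building block of
# secure multi-party computation. Numbers and scenario are made up.
import random

P = 2**61 - 1  # arithmetic is done modulo a large prime

def share(value, n_parties=2):
    """Split a secret into random shares that sum to the value mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Two companies each hold a sensitive number (e.g., hours of recorded audio).
a_shares = share(1200)
b_shares = share(3400)

# Shares are added position-wise; only the combined total is ever revealed.
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
print(sum(sum_shares) % P)  # 4600, computed without exposing either input
```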

Jonas has a PhD in mathematics and works as a researcher and consultant, spreading knowledge of privacy-preserving technologies and their applications.

After this short introduction, we will have a discussion with other industry experts. 

Jonas Lindstrøm, Alexandra Institute

Sune Hannibal Holm, Sektion for Forbrug, Bioetik og Regulering, Copenhagen University

Torben Christiansen, Director of Technology, EPOS

Nick Dunkerley, Creative Director, Hindenburg

Panel: The Sound of the Metaverse     (16:00)

Immersive audio is exploding, Facebook is now Meta, and our avatars are imminent. How will audio technology be integrated into these virtual universes in the future? How can we ensure that the quality is as good as it should be, and not just an afterthought? Which technologies are needed to ensure an optimal audio experience? We have invited some of Denmark's finest talent within VR and spatial audio to discuss this exciting subject!

Demos & Games

Patch XR

PatchWorld is an open-ended world-builder. Dig into a complete collection of interchangeable blocks that power gesture and interaction, generative music machines and pattern makers, sound processing and stompbox-style effects, animation and visuals. Or if you just want to relax and play around, enter our growing library and load up user content or an EP – audiovisual albums you can enter interactively.

The Jellyfish is a new collaborative work by Mélodie Mousset (HanaHana) and Edo Fouilloux that invites audiences to dive into the deep water of their consciousness in a mesmerizing, interactive virtual reality soundscape.

In a dream-like state underwater, visitors in a VR headset encounter ghostly marine creatures and glowing jellyfish, beckoning participants to sing through them.

Bouncy Beats is a virtual music production universe where you can create, perform and record a track of your own.

Vibrating Concert Furniture

Razvan Paisa is a PhD student at Aalborg University's Multisensory Experience Lab. He will present his vibrating concert furniture for cochlear implant users.

 

The Tickle Tuner

Francesco Ganis will present his Master's thesis project: a haptic device used for music training in cochlear implant users. The project was made in collaboration with Oticon Medical.

 

Duo Rhythmo

DuoRhythmo enables people living with motor disabilities to create music collaboratively and remotely in real time. We created DuoRhythmo to empower everyone with different musical and physical abilities while jamming together.

The project was created by Lilla Tóth, Marcus Dyrholm, Scott Naylor, Christian Tsalidis, Balazs Ivanyi and Truls Tjemsland.

Learn more about the project here.

MuCIcal Adventures

Many cochlear implant users describe music as noisy, distracting and unpleasant. However, various research projects have shown that the listening experience can be improved through music training programs. Previous training programs have been developed in app and DVD formats; the use of virtual reality as a training platform, however, has yet to be explored.

‘MuCIcal Adventures’ is a virtual reality game with the purpose of gradually improving cochlear implant users’ abilities in sound localization, musical instrument identification and melody recognition.

Created by Signe Kroman and Emil Sønderskov Hansen

Lyt Igen

Carl Hanefeld-Møller Hutters and Daniel Reipur are research assistants at Aalborg University’s Multisensory Experience Lab and will present two projects. Lyt Igen: an interactive virtual reality application developed for young people with hearing loss to train their hearing and other skills related to their condition. This project was made in collaboration with The Velux Foundations, Decibel, and Rigshospitalet’s Center for Hørelse og Balance. Interactive Music Museum: a museum in virtual reality that lets users listen to and play a wide range of instruments.

In collaboration with

By participating in this event, you automatically become a subscriber to our newsletter mailing list. You can unsubscribe at any time.

You also give Danish Sound Cluster permission to use pictures and videos from the event in marketing material that promotes our activities. Please contact Murielle De Smedt (mds@danishsound.org) if you want to withdraw this permission.

This event was created in collaboration with IDA Fremtidsteknologi. The participant list of this event will be shared with IDA for statistical use only. 

Innovationskraft
When you participate in this event, your time will be used as co-financing for the project Innovationskraft, which is funded by Danmarks Erhvervsfremmebestyrelse and Uddannelses- og Forskningsstyrelsen at a standard rate. Read more about Innovationskraft.
