The Audio Definition Model and Delivery of Next Generation Audio

Join our panel of experts from the BBC, Dolby and Fraunhofer IIS as they discuss the development of ADM, new audio codecs (MPEG-H 3D Audio, AC-4, DTS-UHD) and the possibilities of next generation audio. We will also take a look at some tools used in immersive and personalized post-production.

Form: webinar

Date: 8 February 2022

Time: 15.00 – 16.30

Place: Online via Zoom

Price: free

Language: English

If you have questions about signing up, please contact Murielle De Smedt, mds@danishsound.org

You will learn:

  • The history of ADM development
  • NGA formats
  • Tools used in NGA delivery

Program


Matt Firth, BBC R&D – the history of the development of the Audio Definition Model.

Matt Firth is a Project R&D Engineer and leader of the Production workstream in the Audio Team at BBC Research and Development, a team with a strong focus on Next-Generation Audio. His work at the BBC has involved developing audio production tools, including spatial audio tools for live binaural production at scale for the BBC Proms. He developed the production tool used in the ORPHEUS project, which built an end-to-end object-based media chain for audio content, and he is part of the development team behind the EAR Production Suite, which facilitates NGA production using the Audio Definition Model (ADM).

Next-Generation Audio (NGA) provides the means to adapt content to the preferences of the end user and their playback system, delivering a more personalised, accessible, and immersive listening experience. BBC R&D, along with other industry partners, has worked towards standardising the Audio Definition Model (ADM) to describe NGA content. This talk will cover ADM and how it fits into the production chain, and present the new release of the EAR Production Suite, which has been developed to author these experiences.
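To give a flavour of the metadata this talk is about, here is a deliberately minimal, illustrative sketch (not material from the webinar): a BS.2076-style ADM fragment with one audioProgramme referencing one audioContent, which in turn references one audioObject, parsed with Python's standard library. The programme, content, and object names are invented for illustration; real ADM files authored with tools such as the EAR Production Suite carry far more elements (pack formats, channel formats, block formats with positions).

import xml.etree.ElementTree as ET

# Heavily simplified ADM fragment for illustration only; the element and ID
# conventions follow ITU-R BS.2076, the content itself is invented.
ADM_FRAGMENT = """
<audioFormatExtended>
  <audioProgramme audioProgrammeID="APR_1001" audioProgrammeName="Example Programme">
    <audioContentIDRef>ACO_1001</audioContentIDRef>
  </audioProgramme>
  <audioContent audioContentID="ACO_1001" audioContentName="Commentary">
    <audioObjectIDRef>AO_1001</audioObjectIDRef>
  </audioContent>
  <audioObject audioObjectID="AO_1001" audioObjectName="Commentator">
    <audioPackFormatIDRef>AP_00031001</audioPackFormatIDRef>
    <audioTrackUIDRef>ATU_00000001</audioTrackUIDRef>
  </audioObject>
</audioFormatExtended>
"""

root = ET.fromstring(ADM_FRAGMENT)
for programme in root.iter("audioProgramme"):
    # Each programme lists the content it contains by ID reference.
    refs = [ref.text for ref in programme.iter("audioContentIDRef")]
    print(programme.get("audioProgrammeName"), "->", refs)

A renderer or authoring tool resolves these ID references to determine which tracks belong to which programme, which is what allows a listener to, for example, switch a commentary object on or off.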


David Ziegler & James Cowdery, Dolby

Next Generation Audio provides the means to improve the accessibility of content for everybody, as well as to give consumers more flexibility to personalize their individual listening experience. With Dolby’s latest audio codec, AC-4, these NGA features can be deployed to consumer devices in living rooms today.

James Cowdery will introduce Dolby AC-4 and discuss its relationship with Dolby Atmos and NGA. James will also discuss how live production workflows are expected to support NGA through the adoption of Serial ADM. One essential element of Next Generation Audio is immersive, three-dimensional audio reproduction, which consumers can already enjoy, for example, with the Dolby Atmos experience.

Dolby Atmos is brought to consumers by major streaming platforms (like Netflix and Disney+) as well as by broadcasters in recent Next Generation Audio trials.

David Ziegler will show the integration of Dolby Atmos into industry-standard audio workstations such as Avid Pro Tools and Apple Logic, and how ADM.wav is already used by the media industry today for the storage and delivery of Dolby Atmos content.

James Cowdery works as a Senior Staff Architect for Dolby. Based in Portland, Maine, USA, his current area of focus is the IP transport of next generation audio and metadata. James has a background in embedded digital signal processing software and studied Information Systems Engineering at the University of Surrey in the UK.

David Ziegler works as a Senior Content Engineer for Dolby. Based in Berlin, Germany, he supports content creators such as sound studios, producers, and mix engineers in Germany, the Benelux, and Scandinavia in their work with Dolby Atmos for film, TV, and music. David studied Diplom-Tonmeister (sound engineering) at the Film University Babelsberg and worked as a sound designer and re-recording mixer on various TV and feature film productions before joining Dolby in 2012.

Fraunhofer’s MPEG-H Audio System

Fraunhofer has developed the MPEG-H Audio system, based on the open international standard ISO/IEC 23008-3, to offer an enhanced sound experience for broadcast and new media services such as UHDTV, immersive music services, 4K video streaming, and Virtual Reality. Already adopted by major broadcast standards including ATSC 3.0 and DVB, the MPEG-H Audio system has most recently been selected as the only audio system for the next-generation broadcast system in Brazil (TV 3.0).


Using ADM for MPEG-H audio formats

    • MPEG-H Audio enables content creators to produce immersive and personalized experiences, and metadata plays an essential role. All interactivity features offered to the user are controlled through metadata defined by the content creator during production. The process of generating this metadata is called “authoring” and is an additional step in NGA production compared to legacy content creation. The result of the authoring step is typically a Broadcast Wave Format file with embedded ADM metadata (BWF/ADM); a minimal sketch of inspecting such a file follows this list.
    • The Audio Definition Model (ADM), specified in Recommendation ITU-R BS.2076, defines an open metadata format for the production, exchange, and archiving of NGA content in file-based workflows. Its comprehensive metadata syntax can describe many types of audio content, including channel-, object-, and scene-based representations for immersive and interactive audio experiences. ADM experts acknowledge that application-specific ADM profiles are needed to achieve interoperability in ADM-based content ecosystems; these profiles incorporate the specific requirements of production, distribution, and emission. To ensure interoperability with existing NGA workflows, applications adopting the ADM format should be able to convert native metadata formats to ADM metadata and vice versa, so that artistic intent is preserved in a transparent way.
    • The presenters will explain the MPEG-H Authoring Suite and guide participants through the steps of creating object- and/or channel-based MPEG-H Audio content, monitoring and exporting immersive and interactive audio scenes, and using and experiencing this content in MPEG-H-enabled mobile applications.
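As a rough illustration of the BWF/ADM deliverable mentioned above, the sketch below walks the RIFF chunks of a file with Python's standard library and summarises which ADM elements the embedded metadata contains. Assumptions, not webinar material: the ADM XML sits in an axml RIFF chunk (as in EBU Tech 3285 / ITU-R BS.2088), and the input file name mix_adm.wav is hypothetical; a production workflow would normally use dedicated tooling rather than hand-rolled chunk parsing.

from collections import Counter
import struct
import xml.etree.ElementTree as ET

def read_axml(path):
    """Return the raw bytes of the axml chunk, i.e. the embedded ADM XML."""
    with open(path, "rb") as f:
        if f.read(4) not in (b"RIFF", b"RF64", b"BW64"):
            raise ValueError("not a RIFF/RF64/BW64 file")
        f.read(4)                                   # overall file size (unused here)
        if f.read(4) != b"WAVE":
            raise ValueError("not a WAVE form")
        while len(header := f.read(8)) == 8:
            chunk_id, size = struct.unpack("<4sI", header)
            if size == 0xFFFFFFFF:
                # 64-bit sizes live in the ds64 chunk; not handled in this sketch.
                raise NotImplementedError("ds64-sized chunks need a BW64 library")
            if chunk_id == b"axml":
                return f.read(size)
            f.seek(size + (size % 2), 1)            # chunks are word-aligned
    raise ValueError("no axml chunk found")

def summarise_adm(axml_bytes):
    """Count the main ADM element types, ignoring any XML namespace prefix."""
    counts = Counter(el.tag.rsplit("}", 1)[-1]
                     for el in ET.fromstring(axml_bytes).iter())
    for name in ("audioProgramme", "audioContent", "audioObject",
                 "audioChannelFormat", "audioTrackUID"):
        print(f"{name}: {counts[name]}")

if __name__ == "__main__":
    summarise_adm(read_axml("mix_adm.wav"))         # hypothetical BWF/ADM export

The companion chna chunk, which maps the file's audio tracks to ADM audioTrackUIDs, can be located in the same way, although its payload is binary rather than XML.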

Adrian Murtaza is a Senior Manager, Technology and Standards, at Fraunhofer IIS. Adrian joined MPEG in 2013 and has since contributed to the development of various audio standards in MPEG‑D and MPEG-H. He serves as Fraunhofer’s Standards Manager in a number of industry standards bodies, including DVB, HbbTV, SBTVD, ATSC, CTA, and SCTE. More recently, he has focused on the specification of Next Generation Audio systems and on enabling MPEG-H Audio services in different broadcast and streaming ecosystems.

With a specialization in audiovisual media, Yannik Grewe joined Fraunhofer IIS in 2013 and today serves as a senior engineer for audio production technologies, focusing on MPEG-H 3D Audio. His research in 3D audio production and reproduction technologies has resulted in the publication of several papers on these topics.
He is extensively involved as a sound engineer in producing immersive music applications and immersive and interactive MPEG-H Audio for major events such as the European Athletics Championships, the Eurovision Song Contest, and Rock in Rio.
His current role involves working closely with major broadcasters and streaming service providers in Asia, Europe, and South America to enable MPEG-H Audio in their ecosystems.


Innovationskraft
When you participate in this event, your time will be used as co-financing for the Innovation Power Project, which is funded by the Danish Business Promotion Board and the Danish Agency for Education and Research at a standard rate. Read more about Innovationskraft  HERE
