DARCI Ep.01

We're pleased to introduce our new podcast, DARCI, which stands for Disability, Accessibility, and Representation in the Creative Industries. Our first episode is a brief introduction to what we hope DARCI will achieve and to what our team members have been working on.



A photo of a studio with a computer screen, microphone and headphones.

You can listen to the episode with the audio player embedded below. There is also a transcription underneath.


Transcription of the podcast episode:

Welcome to our new podcast called DARCI, which stands for Disability, Accessibility and Representation in the Creative Industries. I am Krisztián Hofstädter, a member of the Enhancing Audio Description (EAD) team at the University of York. Our EAD project aims to improve the accessibility of films and television shows for visually impaired audiences. The project proposes new techniques that go beyond traditional practices by incorporating sound effects, sound spatialization and first-person narration. The project integrates these techniques into the entire production workflow and involves visually impaired audiences in the design process. We also seek to create guidelines for incorporating these methods into professional broadcasting pipelines and filmmaking workflows. Ultimately, the project aims to provide inclusive film and television experiences. Through this new podcast, our mission is to shed light on work in the fields of disability and accessibility, in particular work relating to the arts that aims to create a more inclusive society. From theories to practical methods, we’ll dive into the realm of accessibility and explore how these methods can impact various aspects of our lives. Accessibility is at the heart of our research, and throughout this new podcast we’ll explore how others are working towards greater inclusion as well. We’ll discuss theories, methodologies and innovative approaches addressing the challenges faced by disabled people. But before we dive into other people’s work, let’s start by introducing our own. In this first episode, members of our team will introduce themselves and the work they do.

Hi everyone, my name is Mariana López and I’m a Professor in Sound Production and Post-production at the University of York. I’m the principal investigator of the Enhancing Audio Description project, which originated from work I started over a decade ago on how sound design can be used as a vehicle for access for visually impaired audiences. The project proposes a new paradigm in the field of accessibility: one in which we break away from the convention that words and the voice are the only way of providing access, and instead explore how other types of sounds, such as sound effects, together with spatialisation, can be integrated into film and television productions to make them accessible while providing a seamless experience in which production and accessibility are intertwined. One of my main roles within the project involves working with film and television makers to make their productions accessible through our EAD methods. I work with directors and producers to come up with the initial sound design plans for EAD for their work. This includes setting the direction for EAD for each production, because every production is different; collaborating in the writing of the first-person narration, or alternative processes if needed; and doing the sound design and editing, which is then followed by mixing work with my colleague Gavin Kearney. While doing this, we’re working on answering a key research question around how best to integrate EAD into film and television creative and technical workflows. Every time we work on a new production we expand the remit of EAD and come up with exciting new ways of making audiovisual media accessible through sound design. In addition, I also explore EAD as a cinematic experience and how it differs from audio-only formats such as radio drama, podcasts and audio games, while also drawing on these many different formats for inspiration.
Finally, I do a lot of dissemination for the project, which includes liaising with stakeholders: blind and visually impaired people, charities, film and television professionals, production companies and broadcasters.

Hello, I’m Gavin Kearney, Professor of Audio Engineering at the University of York, and I’m co-investigator on the Enhancing Audio Description project. My main role on the project is to lead the technical development of spatial audio solutions that can create more accessible soundtracks for visually impaired audiences, whether they are consuming film, TV or interactive media. Spatial audio is all about recreating a 3D soundscape, replicating how we perceive sound in the real world. Traditional audio description services offer third-person narration of visual scenes in movies and TV shows, and while this is invaluable, it often falls short in capturing the full emotional impact of a scene, particularly when it’s the music and sound effects that are crucial. This is where our work comes in: we use spatial audio to present binaural remixes of the soundtracks. Binaural remixes are like a magical transformation of the original soundtrack; we use specialized audio processing techniques to spatially place the audio around the listener. When a visually impaired individual listens to these binaural remixes through headphones, they experience the soundtrack in a way that immerses them in the story. For instance, if there’s a scene where a main character is walking in a forest, they can hear the leaves crunching underfoot all around them, or when there’s a dramatic crescendo in the music, it can feel as though it’s sweeping up from below and soaring above their head. So our approach doesn’t just describe the scene; it allows users to feel it, to be a part of it, and that’s a significant leap beyond traditional audio description, which can be more detached. The technology behind these binaural remixes relies on the principle that our ears perceive sound in a way that allows us to determine the direction and distance of a sound source.
So we use special 3D audio filters that replicate these cues, and during the remixing process we manipulate the audio to ensure it’s accurately placed on the 3D sound stage. Our ultimate goal is to bridge the gap between what visually impaired and sighted individuals experience in the world of entertainment. We want to create more inclusive and emotionally engaging experiences for everyone, so we’re also looking beyond headphone reproduction to see what’s possible using loudspeaker surround sound, where all members of the audience hear the enhanced soundtrack. In a cinema, traditionally, all of the character-driven sound effects and voices are placed at the center loudspeaker, which makes it challenging for a visually impaired person to follow what’s happening. Instead, we propose to break the norm by having all of the character-driven sound effects move around the surround system, following the movements of the actors on the screen and making a more immersive experience for everyone. Overall, the possibilities that spatial audio offers for enhancing the entertainment experience for visually impaired individuals are remarkable. By using cutting-edge technology to recreate 3D soundscapes and immerse users in the storytelling, we are taking a significant step towards a more inclusive and emotionally engaging entertainment experience for all. Our research will continue to push the boundaries of this field, refining 3D audio rendering and production workflows to support enhancing audio description. We are committed to ensuring that every individual can fully appreciate the magic of movies, TV shows and games. Our mission is to make the world of entertainment more accessible and to enrich the lives of countless individuals by bringing the joy of immersive storytelling to all.
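To make the interaural cues described above more concrete, here is a minimal, illustrative sketch of placing a mono sound to one side using only interaural time and level differences. This is not the project's actual rendering pipeline: the `pan_binaural` function, the Woodworth ITD approximation and the simple broadband level difference are assumptions for illustration, whereas real binaural rendering uses measured HRTF filters that capture far richer spectral cues.

```python
import numpy as np

SAMPLE_RATE = 48000
SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, an average head radius

def pan_binaural(mono, azimuth_deg):
    """Place a mono signal at an azimuth using simple ITD/ILD cues.

    azimuth_deg: 0 = straight ahead, positive = to the listener's right.
    Illustrative only: real binaural rendering convolves with measured
    HRTFs; here we model just the time and level differences.
    """
    az = np.radians(azimuth_deg)
    # Woodworth's approximation of the interaural time difference
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + np.sin(abs(az)))
    delay = int(round(itd * SAMPLE_RATE))
    # Crude broadband level difference, up to ~6 dB quieter at the far ear
    gain_far = 10 ** (-6 * abs(np.sin(az)) / 20)

    near = mono
    far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * gain_far
    # A source on the right arrives earlier and louder at the right ear
    return (far, near) if azimuth_deg >= 0 else (near, far)

# Place a short noise burst 60 degrees to the listener's right
burst = np.random.default_rng(0).standard_normal(SAMPLE_RATE // 10)
left, right = pan_binaural(burst, 60)
```

Played over headphones, the right channel leads by roughly half a millisecond and carries more energy, which is enough for the brain to localize the burst to the right, even without the spectral filtering a full HRTF would add.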

Hi, my name is Michael McLaughlin and I’m from Galway, Ireland. I started off in music technology but then went on to do research in bioacoustics, machine learning and animal welfare. I also did some freelance work, primarily doing audio and sound design for documentaries. Currently I work as a research associate on the Enhancing Audio Description project. I’m based over in the Audio Lab alongside Professor Gavin Kearney, and while a lot of people look at sound design strategies, outreach and the creative challenges surrounding EAD, I handle technical issues. For example, how will the listener’s position affect how they perceive sound over a stereo system? What about different loudspeaker placements? I’ve been investigating some of these questions and have designed listening experiments to see how they may affect EAD content. I recently presented some of this research at the Immersive and 3D Audio conference in Bologna, Italy. This understanding is crucial for creating personalized audio experiences, especially for visually impaired audiences. Our recent experiments focused on establishing a control group with sighted participants. We’re getting ready to run the same experiments with visually impaired audiences. By comparing the results of these two experiments, we’re going to derive a new rendering algorithm which is tailor-made for visually impaired audience members. In the future, I will be involved in the development of audio plugins to help content creators quickly implement these methods into their creative practice. I’m also interested in how we can adapt and utilize current technologies for delivering object-based audio, to enable people to have the most personalized experiences when they enjoy EAD.

Hi, my name is Chaimae Alouan. I’m a sound engineer and a PhD student at the University of York, currently researching audio description in Morocco. In the context of the Enhancing Audio Description project, my primary role is that of a project officer. This position involves direct engagement with administrative tasks, including the management of the financial and administrative processes essential to the project’s functioning. Additionally, I handle the planning of upcoming project conferences, workshops and events.

Hello, it’s Krisztián again, the other postdoc working on the project; I collaborate closely with Mariana. My main focus has been on running an experimental study that looks at further developing two of our EAD methods to help convey cinematographic elements through sound. Examples of these elements are types of camera shots, camera angles and camera movements. One of the EAD methods I work with in this study uses sound effects, for example, to provide information on actions, to elicit the presence of establishing shots, to convey abstract scenes, or to indicate time, place and the presence of characters. The other method I’ve worked with in this study is surround sound spatialization, which can help convey the position of characters or objects portrayed on screen. This study uses short clips from the TV show Emmerdale, produced by ITV; the feature film Notes on Blindness by Pete Middleton and James Spinney; and the short film Ecce Homo by Dimitar Kutmanov. For each clip, I drafted different sound design techniques, which are compared in pairs to help identify the best technique to be considered for our work. This is an ongoing study and I very much look forward to sharing the results with you. Apart from trying to answer research questions with studies, I also maintain our website at enhancingaudiodescription.com and assist in organising events.

And that’s our team. If you have any comments, please email us at enhancingad@gmail.com. Also, please consider subscribing to our newsletter; you can find a sign-up form on our website’s home page at enhancingaudiodescription.com. We aim to publish one podcast episode per month. The next one will be with Joe Inghman: Mariana and Joe will discuss Joe’s film Spines, the first BFI Network-funded film to be written and directed by, and starring, an autistic person. Until next time, remember: inclusion is not just a goal but a fundamental right for all. Let’s make a difference together.

Photo by Will Francis on Unsplash.