In this final episode of the DARCI podcast, Mariana interviews the EAD team members about their work over the last four years. The conversation highlights how our specific research tasks, from technical plugins to creative collaborations and focus groups, are making film and television more inclusive for visually impaired audiences.

Transcript
Mariana: Hi, everyone. Welcome to the DARCI podcast, the podcast on Disability, Accessibility, and Representation in the Creative Industries. My name is Mariana López, and I’m a professor in sound production and post-production at the University of York. And today we have a very, very special episode because it’s actually, sadly, our last one, for now anyway, until someone gives us more money. And who knows, maybe that someone could be you. So we’re always open to contributions. So this podcast has been possible thanks to funding from the Arts and Humanities Research Council in the UK as part of the Enhancing Audio Description project. And we have now reached the end of this wonderful project and hence the end of our regular podcast. So we hope you at least miss us a bit and that, as a result, you go back and listen to previous episodes. In our very first episode back in January 2024, we introduced you to the EAD team and what we were working on. Today, we’re going to end our podcast series by telling you what we have been up to and maybe what we hope to do next. I’m today joined by Chaimae Alouan, Krisztian Hofstadter, Gavin Kearney, and Michael McLaughlin. And we’re going to focus on what have been four main areas of our work. One has to do with how we continue to develop our EAD methods, so Enhanced Audio Description. And I’m going to very briefly explain what EAD, so Enhanced Audio Description, is, just in case; many of you might already know, but it’s basically an alternative to Audio Description that uses sound design as the main means of providing access to blind and visually impaired film and television audiences. And it does so by working on the integration and re-levelling, so changes in relative audio levels, of sound effects; the use of audio spatialisation to locate objects and people in the 3D space by panning sound effects and dialogue; and thirdly, the use of flexible first person.
That is, some verbal commentaries, descriptions that are provided by a character in a fiction production or a contributor in non-fiction, and in some cases a type of narrator central to the story. So work on sound effects, spatialisation and first person narration descriptions: that is what makes EAD EAD. And one of the main areas of work has been how to continue to develop the sound design strategies, by thinking about how sound may convey cinematographic elements that are crucial to the story. A second area of work has been moving beyond headphone-based forms of reproduction to loudspeaker-based EAD. And thirdly, there has been our crucial work on film and television productions to find the most effective and successful ways in which we can collaborate to produce EAD. And finally, the development of technology which ethically and critically engages with ways to facilitate EAD integration into film and television productions. Throughout today’s conversation, and always really, I would like to acknowledge the wonderful work of our advisory board, who have provided feedback to us throughout the last four years, steered us through tricky issues and provided invaluable support throughout. So thank you so much to all of them. So we’re going to get started. And Krisztian, I thought we’d start with you. And I was wondering if you would like to share some of the work you’ve been doing and your main findings with listeners. And I hope you say yes, because otherwise it’s going to be a really short episode.
Krisztian: Okay, I have to say yes then.
Mariana: Yes.
Krisztian: Okay, so I prepared something that I will read that sums up the study that Mariana, you and I did in the last few years. It’s kind of like completed, and the paper is currently being reviewed by you and then Gavin, and then hopefully some reviewers at a journal. So soon people can read it online. So the study investigated how sound design can convey cinematographic techniques. We used only two EAD methods, sound effects and audio spatialisation. So that means we didn’t use the first person narration. To gather some ideas, we did a literature review and interviews with post-production sound professionals. With the findings, we designed an experiment that can test the effectiveness of various film sound methods in conveying specific cinematographic techniques. And these techniques were reaction shots; depth of field, more specifically rack and selective focus; tracking shots, more specifically following and lateral (sideways) tracking shots; camera angles, where what we tested was how high and low angles compare; and the last one was shot sizes, medium and wide shot sizes. So we had 33 participants with varying degrees of vision loss, and we did a within-subject experiment with them using pairwise comparisons of different soundtracks against each other and a neutral baseline condition. Some soundtracks used audio spatialisation, some added sound effects or EQ changes; some of these soundtracks were diegetic and some of them were non-diegetic. The final results indicate that sound design can improve the perception of cinematography, and that this was most significant when using combined methods, which often performed best when one of the methods in the combination was diegetic surround sound. So finally, the good news is that the results support the effectiveness of EAD as an accessible method for visually impaired audiences, in which key information is provided through sound design. So that’s the summary of the main task I was working on in this project.
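The pairwise-comparison design Krisztian describes can be sketched in a few lines. This is only a toy illustration: the condition labels and trial data below are invented, and the study’s actual analysis is in the forthcoming paper.

```python
from collections import Counter

# Hypothetical condition labels standing in for the soundtrack variants
# described above (illustrative names, not the study's actual labels).
conditions = ["neutral", "spatialisation", "sfx", "spatialisation+sfx"]

# Illustrative pairwise judgements: (preferred, rejected) per trial.
trials = [
    ("spatialisation", "neutral"),
    ("spatialisation+sfx", "neutral"),
    ("spatialisation+sfx", "sfx"),
    ("sfx", "neutral"),
    ("spatialisation+sfx", "spatialisation"),
    ("neutral", "sfx"),
]

def win_rates(trials, conditions):
    """Fraction of the comparisons each condition appeared in that it won."""
    wins = Counter()
    appearances = Counter()
    for winner, loser in trials:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {c: wins[c] / appearances[c] if appearances[c] else 0.0
            for c in conditions}

rates = win_rates(trials, conditions)
# With this toy data the combined condition wins every comparison it
# appears in, echoing the finding that combined methods performed best.
print(max(rates, key=rates.get))  # → spatialisation+sfx
```

In the real study each pair would be judged by many participants per cinematographic technique, and significance testing would follow; the tally above only shows the shape of the data such a within-subject pairwise design produces.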
Mariana: Thank you so much. So, a bit of a recap: there was a series of interviews with post-production sound professionals, and in those interviews we asked them how they sound designed for specific visual elements, so picture editing changes, cinematographic changes, and then those findings were applied, in addition to others, in Krisztian’s experiment of working with 33 visually impaired people. And I would also like first to thank everyone that participated in the listening tests, but also the wonderful post-production sound professionals that volunteered their time to be interviewed. And one of the things that we asked them as well was what their experience was of the awareness in the film and television industries of visually impaired people as audiences, and it was a bit sad maybe [laughs] to find out that they all kind of agreed that the industry did not discuss visually impaired people as film and television audiences. But the really great stuff that came out of the interviews was how eager and open everybody was to learn more about the project and how they could start changing their post-production sound practices to create more accessible productions. So this was such a wonderful part of the process, to get a group of people that acknowledged there was a problem but also wanted to be part of the solution. So thank you so much to everyone, and thank you so much Krisztian for your summary. Is there anything else you’d like to add?
Krisztian: No, I think that kind of summed it up. Thank you for adjusting it.
Mariana: Thank you so much. So next, Gavin, Michael, you have been collaborating on aspects both of loudspeaker rendition and plugin creation. Would you like to tell us a little bit about it? I don’t know. I think Michael is ready to go maybe.
Gavin: I’m happy to start off, Michael, if you want to get into some of the psychoacoustic tests. Does that sound good?
Michael: Yeah, that sounds good, Gavin.
Gavin: Yeah. Thanks, Mariana. So I think before we kind of get into the nitty gritty of what we’ve been doing, it’s good to give a bit of context of the work that we have done in the past and how it’s led us to be looking at EAD soundtracks over loudspeakers. As everyone’s aware here, we started looking at EAD soundtrack creation over headphones. It was a natural way for us to think about how to approach the problem of giving Enhanced Audio Description soundtracks to visually impaired end users, given that when they went to cinemas they would typically have headphones that would give third person narration to describe what’s going on in the action on the screen. What we had been doing was to try to take the soundtrack elements that were there within the sound design, and to repurpose them to give a more theatrical presentation to the end user. The idea being that we’re not just doing stereo presentations, but we’re doing full spatial audio presentations using binaural surround sound. To do this, there’s a whole lot of acoustics involved and a whole lot of filtering involved. And the end result is that what you’re doing is you’re basically tricking the brain into thinking that the signals picked up at the ears are similar to signals picked up in reality. And so you can create the sensation that the sounds are external to the head. And what this does for visually impaired end users is it gives them a sound stage in which the action unfolds. So rather than listening to standard 5.1 soundtracks, where they have all of the dialogue and Foley just straight in the middle, they now hear the action unfold around them. Now when we originally did our tests with visually impaired end users and those without any visual impairments, we got very interesting results, because visually impaired end users said, well this is a really nice alternative to Audio Description. We really like Audio Description. We still want to have that.
And it was received as a really good alternative in that context. Those without visual impairments actually surprised us though, because we thought that the discrepancy between how the sound sources would have been panned on the soundstage, with the actors moving around the head, would be quite disorienting while you see the actual action unfolding on the screen. This is one of the reasons why we usually have a very much monophonic presentation of actors on the screen, so that the localisation always stays at the screen. But we were surprised, because that group said, “Well, actually, this sounds fantastic. It’s a really great alternative, and can we hear more, please?” So we said, “Okay, well now maybe there’s something here where we could actually think about really having an inclusive sound design where we present the soundtracks over loudspeakers instead of headphones, so everybody will get the same experience.” Now, in order to do that, we have to break the standard film and television production paradigms. So again, where you usually have dialogue and Foley and all the character-driven sound effects all in the centre channel in 5.1, 7.1 or 7.1.4 surround sound systems, now what we’re doing is we are actually panning the audio. So we’re moving the sound sources in a way that actually creates this new theatrical presentation and breaks a lot of the rules of standard film and television post-production. So this leads to all sorts of potential issues, because if you are in a distributed audience, and by that what we mean is if you’re off the sweet spot, which is that small little area in the centre of a loudspeaker system where everything is rendered absolutely perfectly, if you’re away from the sweet spot you can get all sorts of mismatch between the audio and the visual.
Particularly if you’re close to a particular loudspeaker. For example, if you’re sitting in an auditorium, you’re sitting front left, and the loudspeaker system is trying to pan a sound source between the front left and right loudspeakers, it will really just pull it to the left because of where you’re sat. And that’s what’s called the law of the first wavefront, or the precedence effect. So we were like, OK, we need to really understand the limitations of what we can do here with panning sound sources in this way. We know that when we get visually impaired end users into a loudspeaker system and we start to pan the sound sources over it, there’s great benefit to doing that. We’ve run tests where we’ve shown that the error rate in sound source localisation drops from 44% to 22%. And so this is really significant. So that’s where we had to get into modelling, and we had to get into thinking about different types of rooms and the different time and level changes that would need to be made in order to create an intelligent renderer that would say, “OK, you have this type of loudspeaker system. You have this size of room. I’m going to try to present an Enhanced Audio Description soundtrack to you within certain bounds of the spatialisation.” So that leads us to some of the psychoacoustic tests. Michael, do you want to touch a little bit on some of the psychoacoustics that we’ve been doing?
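The binaural trick Gavin describes, filtering signals so the ears receive something close to what they would pick up in reality, can be caricatured in a few lines. This is a deliberately crude toy model using only interaural time and level differences; the project’s actual rendering uses measured HRTF filters, and the function name and constants below are illustrative only.

```python
import numpy as np

def itd_ild_binaural(mono, azimuth_deg, fs=48000):
    """Toy binaural panner: apply an interaural time difference (ITD)
    and level difference (ILD) for a source at the given azimuth.
    Real binaural rendering convolves with measured HRTFs; this only
    captures the two strongest lateral cues."""
    az = np.deg2rad(azimuth_deg)
    # Woodworth-style ITD approximation for a ~8.75 cm head radius.
    itd = 0.0875 / 343.0 * (abs(az) + np.sin(abs(az)))   # seconds
    delay = int(round(itd * fs))                          # samples
    # Crude frequency-independent ILD: up to ~6 dB quieter at the far ear.
    far_gain = 10 ** (-abs(azimuth_deg) / 90 * 6 / 20)
    near = np.concatenate([mono, np.zeros(delay)])
    far = np.concatenate([np.zeros(delay), mono]) * far_gain
    # Positive azimuth = source to the right, so the right ear is nearer.
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return np.stack([left, right])

sig = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
lr = itd_ild_binaural(sig, 60)   # source 60 degrees to the right
```

Even this toy version shows why off-sweet-spot listening is hard over loudspeakers: the precedence effect means a real room adds its own uncontrolled time and level differences on top of the intended ones, which is exactly what the intelligent renderer has to compensate for.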
Michael: Sure, Gavin, thank you very much. So yeah, it was very difficult to tackle all of the issues that Gavin mentioned because there are so many factors to consider. But we started looking at a series of listening tests that examined how well people can actually localise sound when they’re sat off axis from the sweet spot. And those were the initial tests that I carried out, which were presented at ITDA in 2024, where we basically ran a series of minimum audible angle listening tests to see how well people could do when you move them off axis. And the idea behind those initial listening tests was so that we could make an informed decision on the rendering side about the lowest point at which you can begin to pan sounds without them becoming blurred or difficult to localise. And that was the first psychoacoustic test we started looking at. And that research has been followed on by other people. So particularly I want to give a shout out to the work of Mario Vallejo and Katya Sochaczewska, who were looking at other things related to this. So Mario’s research was actually looking at the perception of differences in reverb. So in film and television production, we often use different reverb plugins to create a sense of space. If you have a scene set in a kitchen, you know you’re not going to use a big concert hall reverb because it doesn’t convey the space. But you still need to know the minimum amount you can change a reverb by if you’re going between different scenes. So Mario’s work examined some of that, where he was looking at varying levels of reverb with different direct-to-reverberant ratios, so that for the Intelligent Renderer, and for content creators or producers or sound designers, we know how much they actually have to vary the reverb by in order to convey something like a scene change. And Katya’s work also followed on from that off-axis localisation research. And that’s currently being finished up now.
And Katya has devised some really elegant listening tests, much better than the ones I initially carried out, to follow up on this research. And likewise, we’re getting all this information, but we’re also trying to figure out how we use it in a meaningful way, which leads us on to the plugin development that we were doing for the project as well, which was headed up by Patrick Cairns, looking at developing VST audio plugins so that people can rapidly create EAD content with these things in mind. Now, a big problem, which I think EAD has often gone out of its way to take into account, and it’s really good, is that Audio Description and any kind of accessibility feature tends to come at the end of the production lifecycle. And EAD is always pushed to have that integrated from the very beginning. And there’s also a need to acknowledge that we need to create tools that allow us to rapidly create accessible audio. And some of the work that Patrick was doing was looking at that kind of stuff. So if we need to rapidly create Enhanced Audio Description, we have to be aware of a lot of things, such as metadata tagging and the like. So Patrick’s work looked at implementing things like machine learning to automatically implement this metadata tagging, so that we can speed up the production process and have more of an emphasis on the creative aspects of Enhanced Audio Description.
Mariana: Oh, thank you so much. Anything anyone wants to add?
Gavin: I mean, just to add on to what Michael was saying there, the importance of creating these post-production tools can’t be overstated. I mean, one of the things that we don’t want Enhanced Audio Description to do is to add to production overheads. Because if it does, then it’s going to be very challenging to push it out into the industry in a meaningful way. The idea of using machine learning and artificial intelligence is to really help take the grunt work out of taking the audio assets and metadata tagging them, and to allow an intelligent renderer at the end of the pipeline to say, okay, you’ve got these sound effects, you want to bring them up in a particular way, or you’ve got dialogue, you want to enhance it in a particular way, “I can do this for you based on knowledge of the room that you’re in, based on the loudspeaker system that you have, or whether you’re rendering over headphones.” And this is all about personalisation. And in order to get to that level of personalisation that will really create accessible mixes, we have to be able to remove the pain of the grunt work for sound designers and create methods in which they can quickly audition different ways of rendering EAD soundtracks without the burden of actually doing a lot of the grunt-work mixing themselves.
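The idea of tag-driven, setup-aware rendering that Gavin describes can be sketched very roughly. Everything below is hypothetical: the tag names, setup labels and pan limits are invented for illustration and do not come from the project’s actual plugins.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    tag: str          # e.g. "dialogue", "sfx", "ambience" (illustrative tags)
    azimuth: float    # requested pan position in degrees

def constrain_pan(asset, max_azimuth):
    """Clamp a requested pan to what the playback setup can support."""
    clamped = max(-max_azimuth, min(max_azimuth, asset.azimuth))
    return Asset(asset.name, asset.tag, clamped)

def render_plan(assets, setup):
    """Produce per-asset pan decisions for a given playback setup.
    The limits are made-up placeholders for the per-room, per-system
    bounds an intelligent renderer would derive from psychoacoustic data."""
    limits = {"headphones": 180.0, "5.1": 110.0, "stereo_bar": 30.0}
    max_az = limits.get(setup, 30.0)
    plan = []
    for a in assets:
        if a.tag == "dialogue" and setup != "headphones":
            # Keep dialogue panning conservative over loudspeakers.
            plan.append(constrain_pan(a, max_az / 2))
        else:
            plan.append(constrain_pan(a, max_az))
    return plan

assets = [Asset("door_slam", "sfx", 140.0), Asset("line_01", "dialogue", 80.0)]
plan = render_plan(assets, "5.1")
```

The point of the sketch is the division of labour: once assets carry metadata tags (by hand or via machine learning), the renderer, not the mixer, can apply per-room and per-system constraints, which is what frees sound designers from the grunt work.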
Mariana: Thank you so much. And something that brings this together nicely with the next point is that here we’re thinking about processes that help speed up the creation of an EAD accessible mix, but that doesn’t take away from the fact that every EAD mix is really a piece of art. It’s something that adapts to the original film or television production, and that provides access in a way that fits that production. And this is really important to the work we have done with different film and television teams. We have had the pleasure of working on six creative pieces throughout the project. We’ve explored both live action and animation, both fiction and non-fiction work. And, as I said before, every piece we work on is really unique in the challenges that it brings and the opportunities it also brings in expanding what EAD can do for different types of productions. And we have had the pleasure of collaborating on the short animated documentary film Visible Mending by Samantha Moore, which was nominated for a BAFTA, the short fiction film Spines by Joseph Inman, the documentary A Spectrum by Jack Morris, James Edward Kilpatrick and Michael Reagan, and an episode of Mami Fatali, which is a Polish children’s animation series by GS Animation. This was a collaboration with Monika Zabrocka that explored how different types of accessibility could work for visually impaired children. We also had the pleasure of working on a half-animated, half-live action documentary, Follow the Dogs by Isabel Garrett, and last but not least, the incredibly moving documentary Finding Venus by Mandy Lynn. And in this particular example, we worked on really defying the need for descriptions. So here we refused to describe the women on screen, to maintain ethical principles throughout, avoid further objectifying the women in the documentary, and remain true to the topic of the production.
So we worked really hard with Mandy and the producer Carrie Thiel to find ways in which we could make the production accessible while actually maintaining that as a core principle, which we used to defy descriptions and think about how to make a sensitive topic accessible. In all the productions we have worked on, the EAD was created through a collaborative process with filmmakers, but also really crucial was the collaboration with visually impaired people through focus groups. The work was, I should say, really important. And it was possible because filmmaking teams gave us access to all their original audio assets and allowed us to have not just access to audio stems, which are a combination of different audio elements of a production, but in many cases full access to their digital audio workstation files, so that we could create the EAD mix. And here we can think about EAD as a remixing, a relayering as well, but a remixing of a production to make it accessible. Also really important to these collaborations was the work on first person. And for that first person work, what we did in the last four years was really create a workflow for how to introduce EAD into a production. And here the first person is crucial because we need recordings from either contributors in a documentary or actors in a fiction production. So it’s really important that we consider whether that is something we can do or whether there are limitations. And the way we have worked, which has been really successful, is the EAD team providing a list of key moments and the type of information that needs to be added with the first person. And these are generally things that cannot be easily or successfully conveyed through sound effects or spatialisation. Then getting lines back from the creative team, giving feedback, going back and forth, and then the creative team doing the recordings and us incorporating them into the mix. So that has been a really strong aspect of collaboration.
And of course really, really important is that all the feedback that we provide throughout this process has been based on years and years of listening to visually impaired people and the sort of information that they would like included. And with every production that we do, we learn something new that we can take forward. Most of our productions are now online, so please do head to enhancingaudiodescription.com to find links to the pieces, but also you will find there different recorded talks and podcast episodes about the productions. There’s a demo reel that you can watch, which has an A/B comparison so you can hear what the production sounded like before the EAD and what it sounds like after. And this is really important, because the idea with EAD is that it’s so seamlessly integrated into a production that you might actually think that what you’re listening to was in that, for lack of a better word, original production, but it’s actually part of the accessibility process. So do head to the demo reel if you’d like to get into the nuances of what has changed in each production. The feedback from both filmmakers and audiences has been incredibly positive, showing the potential of sound design really to do more than just offer descriptions that continue emphasising the visual as the crucial aspect of the production. So EAD is about accessibility that challenges ocularcentrism, challenges this idea that it’s the visual that matters and the audio is secondary and cannot actually tell a story, which is definitely not true. Here we have considered audio as equally valuable and as a means to provide an experience that goes beyond words, beyond the verbal. I should also add that our focus groups continued to demonstrate that a key aspect of EAD is really its potential to foster connections between visually impaired people and sighted people.
Throughout our work, visually impaired people have told us how important it was for them to be able to experience film and television with sighted friends without feeling that Audio Description was getting in the way. And they have told us how much they feel EAD is providing that opportunity and that social connection. And the focus groups have been really crucial to elevating the quality of our work. We focus group absolutely every production that we do before we sign it off, and we make changes based on the feedback, always making sure visually impaired people are at the centre of our work. And here I would like to invite Chaimae to tell us a little bit about those focus groups, because Chaimae, you’ve done enormous amounts of work organising these groups, making sure they’re successful. So would you like to share some insights? I hope so.
Chaimae: Yeah, of course. Thank you Mariana, thank you so much. And we all do and did a lot of work organising and planning the focus groups. So to talk about the focus groups, the first thing that comes to mind is the importance of participants. So participants are the most important part of our focus groups. So everything we do is really about making sure they feel comfortable and supported when they decide to take part in one of our focus groups. So as you, Mariana, explained, once a production is ready, we usually start by planning the session and deciding how many participants we need. So we aim for around 15, but it’s usually between 12 and 15 visually impaired participants per focus group. And we also plan how the session will run, including the screening of the film, the survey and the discussion. Then we move on to the recruitment, so we send out the call through our participant mailing list and make sure participants have all the information they need in advance. We stay in contact with them so they can ask questions about anything, including questions about the project, directions, or when they are planning their travel journey. Another thing that’s really, really important is being clear about the film content, especially if there are sensitive topics. We always communicate that beforehand so participants know what to expect. And we don’t just share this with participants, but also with everyone who will be in the room, such as ambassadors that are supporting participants, team members, or the tech team helping on the day. So everyone is aware of any sensitive content that the film might have. We also make sure everything is accessible, from the information sheets and consent forms to the experience on the day itself. In terms of compensation, we provide it in cash to give participants more flexibility, and we also cover their travel. We welcome people from all over the UK, as long as they are happy to attend, of course.
We also make sure to account for all the time they spend with us, including things like completing forms, such as the travel reimbursement form, so that time is compensated as well. Yeah, so honestly one of the best parts is seeing participants enjoy the experience and want to come back. We have had people join multiple sessions, which really shows how valuable their input is. So yeah, participants of course are at the centre of the whole process, and as you mentioned, their feedback is what helps improve the final version of the Enhanced Audio Description.
Mariana: Thank you, thank you so much Chaimae, loads of really important points there, including how important it is for us that we pay participants for their contributions. This is something that is not always talked about much in accessibility projects, but people are giving their time and should be compensated for it if there is funding available. And this is why funding for projects on disability and accessibility is so crucial to their success, because a lot of it, most of it really, is about recognising people’s time, making sure that if they have to travel, they can travel, so that we’re not just relying on a very small pool of people, but also that for the time they spend with us they are recognised as true contributors to the work. So it’s something that we always recommend everybody thinks about when planning disability and accessibility projects. But yes, really, really great to highlight that people join us from all over the UK, and some people have been coming for, I calculated the other day, about 10 years, because they have participated in this project and also the previous one. So some participants we have known for a very, very long time, and we’re really honoured that they want to continue engaging with the project. It’s a huge compliment to all of us, and we also always try to welcome new participants to expand the reach of the methods, but also to make sure that we’re getting balanced opinions and different takes. So thank you so much to everyone that has contributed to our focus groups, surveys, interviews, etc. throughout the years. So the last thing really is, I was wondering if each of you could tell us what you enjoyed the most about the project and some key takeaways. And Krisztian, shall we circle back to you, as they say?
Krisztian: Okay, what did I enjoy the most? Well, you guys know that I don’t live in York, so we rarely see each other, but for people who are listening, I’ll just clarify that I work, or have been working, remotely. And because of that, or because of, you know, how I am, I guess, or both, the thing I enjoyed most was going up to York and meeting as many of you as I could, because I just like you, I guess.
Mariana: Ooooooooo! [laughs] That required that response.
Krisztian: Yes, I think that being together with people was great. Being together with you people was great. Yes.
Mariana: Thank you so much. Do you have any key takeaways from the project?
Krisztian: Key takeaways? I had some takeaways when I was up in York.
Mariana: Not that kind, the more philosophical kind. [both laugh]
Krisztian: Yeah, okay, well I’m just trying to be funny, I guess. Well, I learned a lot, obviously, about film sound. Key takeaways. I guess, now I’m thinking about it: good things take time, and communication is key. I’m not sure whether that’s, you know, my key takeaway. I kind of knew it, but it just kind of felt like, you know, it was reinforced again that if people communicate very well with each other, then you can get the results you need.
Mariana: Great, thank you. Thank you so much. And communication is so important to EAD, where it’s not just about the research team. Of course, it is about thinking about filmmaking teams. And it’s also about thinking about visually impaired people, who, as Chaimae very eloquently explained, are central to our work. We cannot do anything without, and we shouldn’t do anything without, actively involving visually impaired people. So loads of different strands to consider. Thank you so much for that. And we are very happy that you enjoyed coming and visiting us in York.
Krisztian: Thank you. [both laugh]
Mariana: And the actual food takeaways! That’s good. Gavin, do you want to go next?
Gavin: Sure. Yeah. I guess there were two real highlights for me. I mean, I think it’s really worth mentioning that the DARCI Conference last year was such a great opportunity, not only to showcase this work, but also to really see the diverse research that’s happening on disability and accessibility topics. It was really incredibly inspiring, with fantastic keynote speakers, Hannah Thompson and Raymond Antrobus. It was a wonderful event and the vibe was just fantastic for the entire conference. So that was a real highlight for me. But I think the thing that I really enjoyed the most was the mixing of the programme material that we worked on together, Mariana, because it felt like we were breaking the rules. [laughs] And I really liked that, that we were just throwing the entire playbook of spatial mixing out the window and really just going for it and trying things out, and getting some very positive feedback from focus groups. And the main takeaway for me is that actually, when you start down that road as a mixer or sound designer, it’s hard to go back to the limiting paradigm of standard film and TV production. And a good EAD soundtrack really should sound like it was always supposed to be that way. And I found that when we went back to the mixes that were just standard film and TV mixes, they felt flat and didn’t have any life to them. Not that they weren’t good mixes, but the EAD soundtracks, with all of the elements including the first person narration, just came to life. And I really hope that the demonstrations that we’ve put together are inspiring to film and television production companies, to really see that this is not just a band-aid to put on an existing soundtrack. This is a rethinking of the entire pipeline, for the benefit of all end users. So yeah, that’s my takeaway.
Mariana: Thank you very much. Although, after Krisztian told us how much he liked spending time with us, I feel that you should have added that you also liked spending time with us.
Gavin: Of course, goes without saying. [both laugh]
Mariana: I feel like now you should all say it, just so that… yeah, you don't think… But yeah, some really great points there about what it means to remix for EAD. Something we have found anecdotally, and we have not done an official study on this, is that some of the filmmakers have come back to us and told us how actors, contributors, and other people involved in a film sometimes say they prefer the EAD version after they listen to it. We actually don't know why that is, but it happens, and it's worth putting out there. But I would also like to say that the productions we have worked with have been mixed by wonderful sound professionals, and we know how difficult it is to let go of something and let someone else rethink a mix. We only really make changes that are for the purposes of accessibility, so it's always that balance between respecting the production's aesthetic decisions and making changes for accessibility; we always balance those two strands. When I first contact creative teams, I always make a point of telling them that I want them to love the EAD version as much as they love their other version. That is really, really important to us. So some great points there, thank you so much. Michael, I think you should just open by saying how much you love us. [laughs]
Michael: I was going to do that anyway. No, working with you all has been a real highlight. Especially, I mean, you know, we're here talking on the podcast, but we've had so many people who have come through and worked with us, from research assistants to interns and so on. In particular, I want to give a shout-out to Jo Tsai and Sinuo Feng, who did some really great work while they were working in the AudioLab with us. It's been fantastic. I think there's sometimes an impression that working in academia can be a kind of solitary thing, but with this project I felt it was the complete opposite, especially when it came to events like the DARCI Symposium in Edinburgh, the conference we had here in York, and the workshops we do as well. We got to meet so many amazing people and really learn from them, and that's one of the best things, I think, to come out of this project. It's been fantastic. Then there's all the other stuff that goes with this work. I enjoy the technical challenge. I love doing the psychoacoustics research, I love the coding, and I like a good challenge in that regard. Making my life difficult, maybe. But I think it's very interesting that a lot of the interesting research challenges emerge because we're considering accessibility from the very beginning. You discover so many things that you would never consider if you weren't working in this area. So I think that's a big takeaway which I would walk away from this with.
Mariana: Thank you so much and last but not least, Chaimae, what about you?
Chaimae: I do really enjoy my time with you, with all of you. So yeah, like the last …
Mariana: Ooooo!
Chaimae: … to say the word, I might copy some of your words but I really, really, really enjoy the moments where we met in person for either the DARCI conference or the symposium in Edinburgh or any other meetings that we had before. Yeah, so I’m really happy that I was able to work with such a team that is supportive and like a very healthy environment. The working in the EAD project was my first job when I came to the UK and I’m grateful for that. I’m really grateful that I was able to work in such a great environment. Yes, and for the takeaways, yes, for the takeaways, I’m grateful for the opportunity to learn more about accessibility in a practical way. When I did the focus groups, I learned a lot about accessibility in every aspects like for being in person with visually impaired participants for example or to make forms accessible or accessibility in general. So I’m really grateful for that as well.
Mariana: Oh, thank you so much. And that's another really great point. Well, the first great point is how wonderful we all are, so that goes without saying. [laughs] But the second great point is how important all of those processes are. We tend to think about research and creative projects as being all about the thing you create and the output you generate, but a lot of the learning, and actually even new knowledge, comes from the processes that accompany the creation of those outputs: how to make sure surveys and access forms are accessible, that focus groups are hosted in a way that is welcoming and inclusive, and everything else that goes around it. And Chaimae, you've done excellent work refining and honing those systems, making sure they get better and better every time, so thank you so much. Any final words anyone wants to add? It's okay if not. No one's unmuting themselves, so I'm going to take that as you feel you've said what you wanted to say. So it seems we have reached the end of our last episode, which is quite an emotional moment. Chaimae and Krisztian can correct me, but I think I have recorded 29 episodes of the DARCI podcast since January 2024. And although listeners might have heard quite a lot of me, and hopefully of the guests as well, hopefully I let them speak, I would also like to acknowledge the wonderful work both Krisztian and Chaimae have done in editing the episodes, transcribing them, making sure everything was hosted properly on the website, and helping with the dissemination of the work that we do. We're incredibly proud of the wonderful guests we have had over the last two-plus years. We have had episodes on all sorts of topics, in terms of the accessibility and representation issues they focused on, but also in terms of the creative industries represented.
We have had researchers, professional practitioners, and independent professionals, but also industry representatives. We've thought about film, television, theatre, gaming, musical performances, and gallery and museum exhibitions, and we hope that you have all enjoyed this podcast series. Thank you so much to everyone for listening. Do keep engaging with our work online. Of course, our episodes will remain available, so you can revisit them and share them, and we really do appreciate it if you do. Also, a reminder that our website, enhancingaudiodescription.com, will continue to host updates on our work. We also do regular social media posts with updates on the work we've been doing and its dissemination, so do check out how to reach those on our website. And we have a newsletter you can sign up to as well. We don't send too many emails, but if you'd like to know what's going on and have access to possible events and ways of getting involved, that's a great way of doing it. And so thank you, everyone, for listening!
Michael: Cheers everyone.
Krisztian: Bye-bye!
Gavin: Thanks everybody.
Mariana and Chaimae: Bye!