
Personalized Object-Based Audio for Hearing Impaired TV Viewers: new paper published (open access)

May 4, 2017



For a long time now, Ofcom and broadcasters have received complaints that speech on TV can be difficult, or impossible, to understand. The problem is of course much worse for viewers with even quite mild hearing loss. The reasons are varied and well described in a recent article in The Conversation by one of our researchers, Lauren Ward. Causes include unfamiliar accents, unclear speech from actors, excessive background sound effects or music and even, occasionally, badly recorded location audio.

Recent highly publicised complaints about Poldark, Happy Valley and SS-GB, and earlier Jamaica Inn and Wonders of the Universe, have generated something of a media storm over the issue, which shows no sign of abating and was recently debated in the House of Lords.

At the Acoustics Research Centre at the University of Salford we’ve been working on solutions to these problems for a long time; currently much of our research looks at how new object-based audio formats can solve some of these problems and make TV sound more accessible. Object-based audio has the potential to allow individual personalisation of TV sound based on the viewer’s preferences or needs. Our most recent work, carried out using the DTS MDA object-based audio format, can be found in the Journal of the Audio Engineering Society article here. It is freely available as an open access publication – please feel free to read and comment. You can find more details of our accessible audio work on our blog here.



SALSA at NAB2016: Real-time audio post production for Sport

May 12, 2016


Last month saw us take the latest version of our SALSA (Spatial Automated Live Sports Audio) software to NAB 2016. This is the second show Rob Oldfield and I have done in collaboration with DTS and Fairlight: last September we were at IBC in Amsterdam showing our automated sports audio solution working with the MDA open object-based audio format, and in April we demonstrated the new version of the University of Salford’s SALSA software at NAB.

Our new version goes a step further than automated mixing and can now augment the on-pitch audio (e.g. ball kicks) with pre-produced content to enhance the mix still further, using acoustic signatures derived from the detected sounds. Real-time post-production, if you like.

SALSA detecting sound events in real time using standard pitch-side mics

Results so far sound impressive. It’s not the first time sports audio has been enhanced, of course – sound supervisors and sound designers like Dennis Baxter have been doing this for years. Watching horse racing? That sound you hear isn’t the sound of horses coming round the track; it might instead be a slowed-down recording of a buffalo charge. Downhill skiing? Samples played into the mix live from a MIDI keyboard. It all adds to viewers’ engagement in the entertainment of sports broadcast. Some of this is essentially Foley for live sport; the big difference is that we are now automating the process in real time. Audio augmented reality for live sport. Expect more updates as we continue to develop the work…
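
As a rough illustration of the augmentation step described above, here is a minimal sketch in which a detected pitch-side event (a ball kick, say) is matched against a small library of pre-produced samples by comparing simple acoustic signatures, and the best match is overlaid onto the mix at the detection time. The signature, function names and parameters are illustrative assumptions rather than SALSA’s actual implementation.

```python
import numpy as np

def spectral_signature(audio, sr, n_bands=16):
    """Crude acoustic signature: normalised energy in log-spaced frequency bands."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), 1.0 / sr)
    edges = np.logspace(np.log10(50.0), np.log10(sr / 2.0), n_bands + 1)
    sig = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                    for lo, hi in zip(edges[:-1], edges[1:])])
    return sig / (sig.sum() + 1e-12)  # normalise so overall level doesn't dominate

def augment_event(mix, detected, sample_library, event_time_s, sr, gain=0.5):
    """Overlay the pre-produced sample that best matches the detected event."""
    target = spectral_signature(detected, sr)
    # Choose the library sample whose signature is closest to the detected sound
    best = min(sample_library,
               key=lambda s: np.linalg.norm(spectral_signature(s, sr) - target))
    start = int(event_time_s * sr)
    end = min(start + len(best), len(mix))
    mix[start:end] += gain * best[:end - start]
    return mix
```

In a real broadcast chain this would run on short, low-latency buffers rather than a complete mix array, but the match-and-overlay principle is the same.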



AMS Neve visit: “Better known in Hollywood than Burnley”

November 25, 2015

AMS Neve came from a merger of two legendary English audio companies: Neve, famous for its high-quality analogue mixing consoles, and AMS (Advanced Music Systems), the Burnley-based audio innovation company. Last night I was fortunate enough to attend an Institution of Mechanical Engineers visit to AMS Neve in Burnley. Feeling something of a fraud (I’m not a mechanical engineer, but the event was very kindly opened to a limited number of non-members), I was warmly welcomed by the IMechE attendees, and also by Mark Crabtree, founder of AMS and Managing Director of AMS Neve.

The last time I’d visited the site, on a college trip around 25 years ago, AMS were demonstrating their Logic 2 digital mixing console and Audiofile, one of the first hard-disk digital audio editors regularly used in TV post-production. That was around 1990, and it was the first time I’d really understood what could be done with digital audio; at the time these were stunning pieces of technology. The Audiofile used by Editz, where I did a short placement during my college days, was much revered. As well it might be, given its hefty price tag at the time.


AMS Audiofile

During Mark’s presentation Read the rest of this entry »


SALSA: Spatial Automated Live Sports Audio – demonstration at IBC 2015

October 31, 2015


In addition to the Object Based Clean Audio demos that Rob Oldfield and I gave at IBC 2015, we were also showcasing the results of a long-running development project in audio for live sports broadcast. Thanks to some internal funding from the University of Salford’s Staff Innovation Challenge competition, and to some great work from Darius Satonger, we have done some further development on research that we started on the EU FP7 FascinatE Project, capturing audio objects from a live football match.

Live football broadcast - OB at Eastlands

During the development work we spent quite a bit of time in outside broadcast trucks with some of the best mixing engineers around – the skill level of these guys is impressive, and watching them use the mixing desk faders to follow the ball around the pitch during a football match made us realise the complexity of their job. The aim of our work was to find ways to assist the mixing engineer in creating a great mix for both conventional and object-based broadcast, and to ensure that the transition to object-based broadcast would be a painless one by tailoring our additions to existing workflows.

The software we have developed over the last few years works in real time by matching on-pitch sound events to a database of audio object templates, in order to identify the sounds we want to capture, such as ball kicks and referee whistle blows. Once a sound is identified, the software estimates its location (typically to within around 50 cm), isolates it as a short-duration audio object, and tags it with metadata detailing the type of sound, its location on the pitch and its duration.
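
As a rough sketch of what such a tagged audio object might contain, the structure below holds the isolated clip alongside the metadata described above. The field names and example values are hypothetical; the actual metadata schema used by SALSA (and carried in formats like MDA) is not reproduced here.

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class AudioObject:
    event_type: str        # e.g. "ball_kick", "whistle"
    position_m: tuple      # (x, y) location on the pitch, in metres
    start_time_s: float    # time of the event within the programme
    duration_s: float      # length of the isolated clip
    samples: np.ndarray = field(repr=False)  # the isolated short audio clip

# Example: a ball kick detected 34.2 s into the match, localised on the pitch
kick = AudioObject(
    event_type="ball_kick",
    position_m=(23.5, 41.0),
    start_time_s=34.2,
    duration_s=0.25,
    samples=np.zeros(int(0.25 * 48000)),  # placeholder audio at 48 kHz
)
```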

Once the tagged audio object is created it is used in Read the rest of this entry »


IBC 2015: TV Sound for Hearing Impaired People

September 23, 2015

IBC 2015 Demonstration of Object Based Clean Audio

The problems hearing-impaired people face when watching TV have been well documented of late. Loud music, background noise and other factors can ruin the enjoyment of TV for many people with hearing loss – around 10 million people in the UK, according to Action on Hearing Loss.

In previous research funded by the ITC and Ofcom, I looked at solutions that took advantage of the (then) recent introduction of 5.1 surround sound broadcasting. Some of this ended up in broadcast standards and is being used by broadcasters. Now, emerging audio standards are opening the door to much greater improvements in TV sound for hearing-impaired people, and for many others too.

I’ve written about some of this work before: a recent blog post described our article in the Journal of the Audio Engineering Society, in which my colleague Rob Oldfield and I picked up where my PhD left off and looked at how we could improve TV sound for hearing-impaired people using features of emerging object-based audio formats. In object-based audio, all the component parts of a sound scene are broadcast separately and combined at the set-top box based on metadata contained in the broadcast transmission. This means that speech, and other elements important to understanding the narrative, can be treated differently from background sound (such as music, noise, etc.).
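
As a rough sketch of how that personalisation could work at the receiving end, the example below mixes separately delivered audio objects using per-category gains, boosting narrative-critical objects relative to background sound. The categories and gain values are illustrative assumptions, not part of any broadcast standard.

```python
import numpy as np

# Per-category linear gains for a listener who wants clearer dialogue
CLEAN_AUDIO_PRESET = {
    "speech": 1.0,        # keep dialogue at full level
    "key_effects": 0.8,   # effects important to the narrative
    "background": 0.4,    # music, crowd and ambience turned down
}

def render(objects, preset):
    """Mix a list of (category, samples) audio objects with per-category gains."""
    length = max(len(samples) for _, samples in objects)
    mix = np.zeros(length)
    for category, samples in objects:
        gain = preset.get(category, 1.0)
        mix[:len(samples)] += gain * samples
    return mix

# Example with one second of placeholder audio at 48 kHz
sr = 48000
objects = [
    ("speech", 0.1 * np.random.randn(sr)),
    ("background", 0.1 * np.random.randn(sr)),
]
personalised_mix = render(objects, CLEAN_AUDIO_PRESET)
```

The point of the sketch is simply that the balance between objects is decided at the receiver, per listener, rather than being fixed at the mixing stage.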

I’ve just returned from IBC in Amsterdam where we’ve been demonstrating some University of Salford research outputs on object-based clean audio with DTS, a key player in object-based audio developments.


IBC 2015: The largest global electronic media and entertainment show in Amsterdam last week.

Object-based Clean Audio at IBC 2015

We spent last week showing the results of our recent collaboration with DTS – presenting personalised TV audio and Read the rest of this entry »


PhD Studentships in Perception of Sound: Deadline: 30th September 2015

September 16, 2015


Interested in audio perception? We’re looking for someone to research games audio, binaural audio, emotional response to sound, or audio quality. You would be working with the S3A project, which is studying the future of spatial audio from production through to reproduction at home. The goal of S3A research is to introduce practical technologies for spatial audio that will revolutionise how listeners experience sound. S3A is a collaborative project between the Universities of Surrey, Salford and Southampton and BBC Research and Development. You would be based in Salford.

Entry Requirements
Candidates should Read the rest of this entry »


Clean Audio for TV broadcast: An Object-Based Approach for Hearing-Impaired Viewers

May 18, 2015


Update: I’ve just uploaded a new journal article, published last month in the Journal of the Audio Engineering Society. Happily, the University of Salford paid for it to be open access as part of their open access strategy, so it’s freely available for anyone to download.

The paper is a follow-up to my PhD research, which was all about looking at methods to improve TV sound for people with hearing impairments by enabling what is usually known as ‘clean audio’ – helping hearing-impaired viewers to hear speech more clearly than is often the case.

The early part of the PhD was funded by the ITC and then Ofcom; later parts were carried out as part of the EU-funded FascinatE project, which I’ve written about on this blog before. The FascinatE project used what is known as object-based audio to implement interactive 3D (with height) spatial audio which varied depending on the user-defined visual scene. The FascinatE viewer had free navigation of the visual scene and could point their ‘camera’ at whatever part of the video panorama was of interest to them. The audio ‘objects’ rotated and/or moved to match the chosen view.
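
As a small sketch of the idea that the audio objects follow the chosen view, the function below rotates an object’s azimuth by the viewer’s camera pan before rendering. The simple geometry and the function name are illustrative assumptions, not the FascinatE implementation.

```python
def rotate_object_azimuth(object_azimuth_deg, camera_pan_deg):
    """Return the object's azimuth relative to the viewer's current 'camera' direction."""
    relative = object_azimuth_deg - camera_pan_deg
    return (relative + 180.0) % 360.0 - 180.0  # wrap to the range [-180, 180)

# A commentator object straight ahead (0 degrees) after the viewer pans 90 degrees right
print(rotate_object_azimuth(0.0, 90.0))  # -> -90.0, i.e. now to the viewer's left
```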

In the more recent clean audio work above Read the rest of this entry »


I just made (most of) my research papers open access

April 17, 2015
Open access logo, from Wikimedia Commons

Having ranted a few years ago about the issues around open access to academic research outputs, and the lack of it, I’ve spent quite a bit of time chasing up recent developments in publishing guidelines. As a direct result of research funding organisations insisting on open access to the publications they have funded, the situation has improved considerably.

In many cases authors can publish pre-print versions of their work on institutional repositories like Salford’s one here; in other cases authors can self-archive on personal websites only. Unfortunately this sometimes specifically excludes any site that uses metadata to enable searching of archived material – so you can publish (or at least self-archive) the work on any site that does not allow it to be found amongst other publications. At least this means that I can point researchers who email me for papers to a link, which is progress. There is a great guide to what is allowed here on the SHERPA site.

Anyway, having researched the policies of the various publishers I have submitted work to, I have created an archive on a personal website, as permitted, and most of these papers are now available for free download.

Here it is.

There are still Read the rest of this entry »


DIGITAL SKILLS FESTIVAL 2015

December 3, 2014

Useful event for our students at MediaCityUK.



Wednesday, February 11th

The largest digital recruitment fair in the UK is back for its fourth year: dedicated to students looking for careers within the digital industry.

Talent Day is the ideal place for students studying computer science, IT, design, digital marketing and media to meet digital businesses across the North West and find jobs, placements and work experience.

The event is run by Manchester Digital, the not-for-profit trade association for digital businesses in the North West, and fulfils part of our remit to build connections between education and industry.

Over 60 businesses will be in Manchester Town Hall on Wednesday, February 11th looking for students to fill placements and full time roles. Opportunities on offer will include app development, games development, front and back end development, design, marketing, social media, SEO, project management and much more.

Businesses looking for talent include: Autotrader, Apadmi, Delineo, Code Computerlove, Barclays, rentalcars.com, Amaze, McCann Manchester…

View original post 51 more words


Picture Post: Abbey Road Studios Visit

February 15, 2014

Arrival at Abbey Road Studios

A real treat for me this month as I visited Abbey Road Studios for the first time and was treated to a tour around what is pretty much hallowed ground for audio geeks and music fans, both of which descriptions apply to me. Arriving over the iconic zebra crossing, I was met by Jon Eades from Abbey Road, who was kind enough to show us around. First up was Studio 3 for a run-down of the history of the studios and of the changes in audio technology that have taken place during its illustrious career. The building was first converted into a recording studio in 1931 by The Gramophone Company, later becoming EMI Studios and finally becoming known as Abbey Road Studios in 1970. We’ve had a couple of students on work placement here from our audio courses at the University of Salford over the years, and one is still working there today.


EMI TG12345 console. A piece of music history still in use today at Abbey Road (the TG name is from ‘The Gramophone Company’, EMI’s predecessor)

Studio 3 was our starting point and is one of the smaller studios, mostly used for pop and rock music recording – Pink Floyd’s Wish You Were Here was recorded there.

There’s a lovely analogue SSL desk and Pro Tools setup, but the most unexpected feature for me was that at one end of the studio sits a 1970s TG12345, perfect and ready to go. The distinctive audio quality of its circuitry is so in demand that it remains in regular use at Abbey Road.

In fact the mix of classic and new is a theme throughout the building. The sound of the mixing consoles developed by EMI in the 60s is still considered so good that sessions at Abbey Road often use these TG desks as part of the signal path, with the signal routed through the TG12345 as part of a Pro Tools recording workflow.

Read the rest of this entry »