NO EXCUSE

The podcast industry is failing the deaf and hard of hearing community

[Photo: A woman sits alone on a bench, using her mobile phone as the sun sets at a beach. Reuters/Dinuka Liyanawatte]
Disability advocates are fighting to make podcasts more accessible to the hard of hearing.

Podcasts are quickly becoming a major part of the American media diet, with more than 100 million people listening to them regularly.

But a large group of content consumers is being left out. At least 37 million adults in the US have trouble hearing, and that’s not counting people who might struggle to understand audio content because they’re not native English speakers.

Kahlimah Jones, a frustrated deaf consumer, sued podcast network Gimlet Media earlier this year, arguing the company's website is equivalent to a restaurant or a pharmacy, and should therefore provide accommodations to people with disabilities under the Americans with Disabilities Act (ADA).

The case is still pending, but it's already shining a light on podcasts' lack of accessibility. Closed captioning doesn't really exist in podcasting, and while some podcast producers, including NPR and StoryCorps, offer transcripts, they are few and far between.

There are several reasons for this. Turning audio into text can be expensive, and most listening apps are not built to display transcripts. No regulations spell out whether podcasts must be accessible, let alone who should be responsible for getting them there. But that could change as the podcasting industry grows bigger and attracts greater scrutiny from the disabled community.

“Podcasts are exploding as a form of storytelling and journalism and everyone deserves to enjoy them,” says Alice Wong, a disability activist and podcaster. “Access to information, media, and culture is a human right.”

From advocacy to regulation

People in the deaf community already pushed through major changes in the TV industry, where closed captioning is now mandatory under US law. They first created a proof-of-concept by captioning films themselves, and later worked with both the US government and broadcasters in the 1970s and 1980s to spread the practice. By 1980, several TV networks offered closed captioning in select shows and news broadcasts. At that time, deaf and hard-of-hearing people had to purchase expensive decoders to enable the service. But in 1990, the government changed the law to require every TV set to include a captioning decoder.

The Americans with Disabilities Act, which passed in the same year, mandates other accommodations for disabled people in public spaces and protects them from discrimination.

But the ADA, and later regulations passed during the Obama administration, don’t fully address accessibility in digital content, so it’s been up to courts to decide whether and how it should be implemented. In 2012, a court found Netflix and other online streaming services qualified as “places of public accommodation” under the ADA; Netflix settled the case by agreeing to make video content accessible for deaf people. Jones is making the same argument for podcasts in her class-action suit against Gimlet. Spotify, which owns Gimlet, said it can’t comment on pending litigation.

Closed captioning in radio, perhaps a closer parallel to podcasting than TV or streaming video, hasn’t really taken off. NPR and others have tried to implement closed captioning in radio, though it’s not required by law.

Some deaf advocates doubt any podcast inclusivity will come via the ADA or closed captioning. The law is 30 years old, they argue, and not equipped to keep pace with new technologies. "The foundation is not strong enough," said Ahmed Khalifa, a deaf digital marketer and accessibility advocate behind the HearMeOutCC online resource.

Because closed captioning has never taken hold in the podcasting industry, rolling it out would be difficult and expensive. Making podcasts accessible through transcripts would be much easier, advocates say.

Who is responsible for accessibility?

This is an unsettled question. Podcast producers can already add transcripts to their webpages without having to wait around for listening apps to provide a space for them. But most podcast consumption happens via the apps. The companies that own them also have heftier budgets and more tech resources than independent producers, and are in a better position to develop the necessary technology.

“Spotify is investing millions and millions in Gimlet,” said Khalifa. “It’s a shame that they don’t think about those who can also benefit from accessibility.”

Spotify told Quartz in a statement it offers transcripts for some of its shows, including the Michelle Obama podcast, and helps other producers make shows accessible for the hard-of-hearing community. “We want as many people as possible to be able to use Spotify in their daily lives, so they have the ability to navigate the app using assistive technologies and we will support accessibility settings wherever possible.”

In the absence of regulations, it ultimately comes down to who is willing to take on the cost of turning audio into text. With rates as high as $1.50 a minute, the cost of transcription services can add up.
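The arithmetic is easy to sketch. A hypothetical helper, using the $1.50-per-minute ceiling quoted above (the episode length and season size below are illustrative assumptions, not figures from any producer), shows how quickly the bill grows:

```python
# Rough cost estimate for human transcription at the quoted
# ceiling of $1.50 per audio minute. The episode and season
# figures below are hypothetical, chosen for illustration.
RATE_PER_MINUTE = 1.50  # USD, high end cited in the article

def transcription_cost(minutes_per_episode: int, episodes: int,
                       rate: float = RATE_PER_MINUTE) -> float:
    """Total cost in USD to transcribe a run of episodes."""
    return minutes_per_episode * episodes * rate

# A weekly 45-minute show, transcribed for a full year:
season_cost = transcription_cost(minutes_per_episode=45, episodes=52)
print(f"${season_cost:,.2f}")  # prints $3,510.00
```

For an independent show earning little or nothing, a recurring four-figure line item is exactly the kind of expense Staudt describes producers declining to take on.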

"Most podcasts are not making any money, and the extra expense, even if it is low, keeps them from doing it," Matty Staudt, president of Jam Street Media, which produces podcasts for brands, and formerly of Stitcher and iHeartRadio, told Quartz in an email.

Others simply don’t know what the deaf community needs and wants from podcasts, he added.

Just as with closed captioning technology, transcription is getting better and cheaper over time. AI systems can now do it automatically, if not flawlessly: speech recognition often fails to describe non-speech sounds, and struggles with accents, unusual intonation, and languages other than English.

Despite those issues, Khalifa and other advocates say podcasters could use AI programs to do half of the work, and then have a human clean up the text afterwards.
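That hybrid workflow can be sketched in a few lines. Automatic transcription engines typically return timed segments with confidence scores, and only the shaky ones need a human editor's attention. The data shape here is a generic assumption for illustration, not the output format of any particular engine:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float       # seconds into the episode
    text: str          # machine-generated transcript text
    confidence: float  # engine's 0-1 confidence score

def needs_review(segments, threshold=0.85):
    """Return the segments a human editor should double-check."""
    return [s for s in segments if s.confidence < threshold]

# Hypothetical output from an automatic transcription pass:
draft = [
    Segment(0.0, "Welcome back to the show.", 0.97),
    Segment(3.2, "Our guest is [inaudible] from Gimlet.", 0.61),
    Segment(7.8, "Let's get started.", 0.94),
]

for seg in needs_review(draft):
    print(f"{seg.start:>5.1f}s  {seg.text}")
```

Only the low-confidence middle segment surfaces for review, which is the point: the machine handles the bulk of the audio, and the human's time goes to the parts the machine got wrong.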

Improving on the transcript—and beyond

The drive to make podcasts accessible could also push the industry to innovate beyond the transcript, advocates say. Some podcasters are already getting creative with article-style transcripts and with video versions of their shows on YouTube, which offers automated closed captioning for free.

As for interactive transcripts, along the lines of the closed captioning Jones' lawsuit is requesting, a few paid services, such as Otter.ai, give audiences the chance to read along with synced text as the audio plays. It is, however, another expense for producers to take on.
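Under the hood, synced read-along text is usually just a timestamped transcript. The web's standard WebVTT caption format, for instance, pairs each line with a time range; a minimal generator (the cue text below is made up for illustration, and WebVTT is one possible format, not what any particular service uses) looks like this:

```python
def to_timestamp(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def to_webvtt(cues):
    """Render (start, end, text) tuples as a WebVTT file body."""
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

# Hypothetical cues for the opening seconds of an episode:
print(to_webvtt([
    (0.0, 2.5, "Welcome back to the show."),
    (2.5, 6.0, "This week: captions and podcasts."),
]))
```

A player that already has this timing data can highlight each cue as the audio reaches it, which is all "synced text" requires.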

The easiest route, perhaps, would be for the industry to tap technology the big listening apps already own. Both Google and Spotify, the latter through its production tools Soundtrap and Anchor, have tools that turn audio into text for other purposes, like backend editing or SEO. Those tools could be adapted into audience-facing accessibility features.

With all these tools, making podcasts inclusive might not be such an insurmountable task, even without regulations.

“There is no excuse for major podcasting platforms to not provide high-quality, accurate human-produced transcripts/captions for every podcast,” wrote Wong, who is also editor of Disability Visibility: First-Person Stories from the Twenty-First Century. “If independent podcasters who crowdfund and budget carefully like myself can produce transcripts, so can these companies.”