Opening the Book on Closed Captioning!

[Image: an echidna looking at a computer screen]

We've all heard it: "Closed captioning for this program is brought to you by Company X." But have we thought about what captioning truly means and what it can do for those who benefit from it most? On this International Week of the Deaf, I'll give a primer on captioning and why it is so important to implement online.

As you have likely seen, captions appear as text at the bottom of a television or computer screen, providing (hopefully) not only a word-for-word transcription of any speech, but also information on ambient sounds such as applause, a jeering crowd, or the creak of a door.

Captioning's original purpose was to assist the deaf and hard of hearing, but it has been anecdotally reported that many caption users are hearing people who want to enjoy television in noisy environments, or people learning the language of the captioned media.

There are two forms of captioning: open and closed. Open captions appear by default and cannot be turned off; closed captions appear only if an end-user turns them on. There are also three ways to create captions: human captioners using stenotype machines, human captioners using voice recognition, or computer-generated captions produced without human supervision.
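To make the "closed" part concrete for online video: in the browser, a viewer-controlled caption track can be switched on or off through the standard text-track API. The TypeScript sketch below is illustrative only; the element id "player" is a hypothetical placeholder, and it assumes the video already has a captions track attached.

```typescript
// A minimal sketch, not a production player: it assumes a <video id="player">
// element that already has at least one captions track loaded.
function toggleCaptions(video: HTMLVideoElement, show: boolean): void {
  // textTracks lists every caption, subtitle, or description track on the video.
  for (const track of Array.from(video.textTracks)) {
    if (track.kind === "captions") {
      // "showing" renders the cues on screen; "hidden" keeps the track loaded but invisible.
      track.mode = show ? "showing" : "hidden";
    }
  }
}

const player = document.querySelector<HTMLVideoElement>("#player");
if (player) {
  toggleCaptions(player, true); // the viewer opts in, much like pressing CC on a remote
}
```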

The first requires the expertise of a trained professional who, using a stenotype machine, types shorthand at a rate of at least 225 words per minute to capture every word spoken. These professionals have similar training to court reporters and medical transcriptionists and are certified through the National Court Reporters Association.

Creating captions through voice recognition involves an employee parroting the spoken words back into speech recognition software, which then generates the text captions. It is less ideal than a captioner using a stenotype, as it is more prone to errors.

The third and, for many, most problematic method consists of software generating captions without any human intervention. It doesn't take much Googling to find articles with titles like "19 best captioning fails" built on this method, yet it remains popular with many video producers. This article from Quartz drives home the point that machines lack context and subtlety, and can miss rapid-fire exchanges or multiple voices speaking at once.

And captioning, as I discovered reading this article about audio accessibility, is not just for video. Somehow I hadn't considered the prospect of my favourite podcast or iTunesU lecture being captioned or having a transcript, but of course it makes perfect sense. When I asked National Captioning Canada about the state of audio captioning in this country, I discovered that no serious efforts were under way. It was something they wanted to do, but no organization was yet willing to pay for it.

This is all very interesting, but what does it have to do with the digital world? Well, under Level A of WCAG 2.0, all pre-recorded audio and video content must be captioned; under Level AA, this stipulation is broadened to include live content. This means that the demand for captioning of all types of media should explode between now and 2021, when the AODA expects Level AA compliance. It may be best to get ahead of that curve now.
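For anyone wanting to get ahead of that curve, the usual starting point for pre-recorded web video is a WebVTT caption file attached to the player. The sketch below is a rough illustration rather than a compliance recipe; the file name "captions-en.vtt" and the query selector are assumptions for the example, and the caption file itself would still need to be produced by one of the methods described above.

```typescript
// A minimal sketch for attaching a caption track to pre-recorded web video.
// The WebVTT file name below is a placeholder, not a real asset.
function attachCaptions(
  video: HTMLVideoElement,
  vttUrl: string,
  lang: string,
  label: string
): HTMLTrackElement {
  const track = document.createElement("track");
  track.kind = "captions"; // captions (speech plus ambient sound), not just subtitles
  track.src = vttUrl;      // the WebVTT file of timed cues, e.g. "[applause]"
  track.srclang = lang;
  track.label = label;
  video.appendChild(track); // the browser can now offer the viewer a CC toggle
  return track;
}

const video = document.querySelector<HTMLVideoElement>("video");
if (video) {
  attachCaptions(video, "captions-en.vtt", "en", "English");
}
```

Once a captions track is attached this way, most browsers expose their own CC control, so the viewer decides whether to show it, which is exactly what makes the captions "closed" rather than "open".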

The TV ads about closed captioning make it look ubiquitous, but I have discovered that there is much to be done to bring about more captions and, better yet, captions of higher quality. I hope that, going forward, you will consider this when creating your next company video or multimedia project.
