Interpreting Your Accessibility Scan

[Image: an echidna perched over a computer screen showing icons for deaf and low-vision accessibility]

We all want validation, whether it’s a numeric grade, a hearty thumbs-up, or a big green checkmark. That desire is especially strong when you’ve hit a major milestone and want something to present to management -- you know, the easy-to-read score that shows you’re doing the right thing.

The problem is that, with accessibility, there’s no one-size-fits-all approach. Compliance can be a matter of managing interpretation challenges -- especially when it comes to automated checkers.

Simply put, automated checkers cannot appreciate “intent.” They only flag what they see (or don’t see) and suggest that something may be a problem. The human element that actually interprets what is and isn’t an error is far more vital.

Auto-checkers can show you the path. But only by walking it can you determine whether or not the pathway is actually accessible.

In Ontario, we’re fast approaching January 2021, the deadline by which companies need to ensure their website content meets WCAG 2.0 AA compliance. Other jurisdictions have similar deadlines and requirements. Accessibility is increasingly a concern for clients who want to ensure that they’re aligning with both industry best practices and any legislative requirements that apply in their jurisdiction. Admittedly, though, accessibility is not a black-and-white issue, and it is that grey area that can cause concern, frustration, and consternation.

Full Automation Does Not Exist

As much as we would love to point to one piece of technology and say, “This is the absolute source of truth,” the fact of the matter is that nothing allows for that level of certainty, because certain accessibility guidelines are open to interpretation.

We use a number of services to scan for accessibility errors. At the moment, the one we feel best aligns with our customers’ needs is Dynomapper, a web accessibility tool that can scan an entire website for WCAG / Section 508 compliance. However, even though we feel this is one of the best-in-breed software packages, it is still limited by the code it is given and by its interpretive ability.

Essentially, any automated solution can only flag items that it “thinks” are errors. 

For example, a scanner may flag an image for review because it believes a colour contrast issue may exist in the photo. It isn’t identifying the issue definitively; rather, it’s indicating that the test needs human validation. The same can be said for “alt” text on images. A screen reader cannot identify whether an image is a logo or not -- even if the file is named logo.jpg -- but it will read the alt text. Dynomapper, for example, will flag this as a potential error to the effect of, “Alt text is not empty and image may be decorative.”
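
To illustrate the judgment call involved, here is a minimal markup sketch (the file names and alt text are hypothetical). A scanner can only see whether the alt attribute is empty; a human has to decide which treatment is right for each image:

    <!-- Purely decorative image: the empty alt tells screen readers to skip it -->
    <img src="divider-flourish.png" alt="">

    <!-- Logo with descriptive alt text: a scanner may flag this as
         "alt text is not empty and image may be decorative," but a human
         reviewer can archive it as intentional, useful context -->
    <img src="logo.jpg" alt="Acme Widgets company logo">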

Errors Aren’t Flagged, They Just Exist

On the other side of the equation, if you are using an adaptive technology -- like a screen reader -- to test the user experience, note that it will not shut down, flag an error, or stop working when it encounters a WCAG compliance error.

For example, if you used <b></b> markup to identify “bolded” text, as opposed to the <strong></strong> element required for WCAG 2.0 Level A compliance, then the screen reader will simply read the content without any notification of the added emphasis. Is it an error? Yes, in the strictest sense of the word. However, it can be argued that a user relying on a screen reader would only miss out on emphasis, not content or context.
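
As a minimal sketch of the difference (the sentence itself is just an example):

    <!-- Presentational only: bold to the eye, silent to most screen readers -->
    <p>Submit your form <b>before Friday</b>.</p>

    <!-- Semantic emphasis: adaptive technologies can convey the importance -->
    <p>Submit your form <strong>before Friday</strong>.</p>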

The best example, from a screen reader perspective, comes from linked content. WCAG 2.0 AA compliance states that all text links must provide context, as the majority of screen readers do not provide access to links within the paragraph where they are placed in the HTML. Instead, they default to the “end” of the page when reading and then provide a list of link options. If you have chosen to use “click here” for five links on a page, then the screen reader will read out five “click here”s, absent any context.
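
A minimal sketch of the problem and one possible fix (the URLs and link text are hypothetical):

    <!-- Read out of context in a links list, these are indistinguishable -->
    <a href="/reports/2020">Click here</a>
    <a href="/pricing">Click here</a>

    <!-- Each of these links makes sense on its own, out of context -->
    <a href="/reports/2020">Read the 2020 annual report</a>
    <a href="/pricing">View our pricing plans</a>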

The adaptive technologies won’t flag these WCAG 2.0 violations within the content; they will simply present the information as best they can, given the context the content provides.

High-Tech, High-Touch Approach Works Best

So what does that mean when you’re looking at a Dynomapper results page that shows a 13 per cent success rate? To be honest, the number itself doesn’t mean much without context. With Dynomapper, as with many other automated scans, the true value is in providing context for the errors and the “errors.” And that requires human involvement.

For example, Dynomapper breaks down content into three key buckets: Known Problems, Likely Problems, and Potential Problems. However, even the “known” problems aren’t exactly definitive, as the aforementioned <b>/<strong> issue is flagged as a known problem.

Some services, like Dynomapper, will allow you to export a PDF or XLS file that contextualizes the pages, so you will see “Fail” and “Conditional Pass.” Those listings tend to be more valuable because they contextualize the likely and potential problems as not violating compliance metrics.

Ultimately, there is some human legwork required with any service to interpret and contextualize the results. If a likely problem is a logo image that has alt text included -- WCAG 2.0 suggests that logos are exempt from the alt text requirement -- then a user can archive this as not actually being an error. In fact, if you are adding extra contextual information for accessibility, that is a positive, not an error.

As part of any accessibility review, the scanner serves as an efficient tool for flagging potential problems. It then requires a human who can cross-reference WCAG 2.0 and interpret those flags to determine whether they are, in fact, errors. As an added bonus, Dynomapper allows you to “archive” items once you’ve determined they are not errors, which should prevent them from appearing in future scans.

What are the Consequences?

First, let us begin with the caveat that this is not meant to be a document offering blanket indemnification -- nor is it necessarily applicable across all jurisdictions. However, our interactions with the Province of Ontario have shown that AODA is not intended to be punitive -- it is intended to be collaborative.

We are all working towards a more inclusive society and, from a business perspective, we are working to create and distribute content in the most effective way possible. WCAG compliance helps us do that. 

While there are those who will speak of fines and threats of litigation, realistically, if you are making your best effort to be proactive about accessibility -- and you are responsive to any errors or challenges that are brought to your attention -- then you are embracing the spirit of AODA, ADA, or any other jurisdictional legislation.

Fines and punishments have been few and far between -- and those have been reserved for the more flagrant and repeat violators of accessibility laws. In practice, you will likely receive warnings from the jurisdiction before any penalties are assessed, and you will be given an opportunity to make changes. More likely still, an end user or private citizen will flag an issue for your attention -- which gives you an opportunity to solve that person’s problem and, likely, improve their impression of your business through your responsiveness.

Conclusion

We really wish there were a simple, one-size-fits-all approach to accessibility evaluation that would simply spit out a big green thumbs-up -- but that’s not the case. WCAG 2.0 has many clauses that allow for interpretation, or that may even be set aside, based on the type of content and its audience. With no blanket, black-or-white structure in place from the legislation, it’s impossible, in turn, to create a tool that provides that type of evaluation.

Instead, accessibility compliance requires a high-tech, high-touch approach that combines technology to flag potential errors with an informed human to assess and evaluate the reports that come in. Your web development partner should provide guidance on this subject.

Technology can show you the map, but you need to walk the path yourself to ensure it’s truly accessible.
