Who, Not How Many, Should Be the Real Question


Whenever you run user testing, you want to get it right the first time. After all, testing takes time, resources, and effort. And by the time you notice you’re not getting the results you want, you may be too far down the rabbit hole to easily adjust.

We often focus on the “how many” when it comes to finding participants, but I’m equally -- if not more -- concerned about the “who.”

One of the great things about user testing is that you don’t need a tremendous pool of people to get actionable results. Even a relatively small number of users can reveal trends and behaviour patterns, which can be extrapolated into larger findings.

This article by Jakob Nielsen from 2004 remains as relevant today as it was 15 years ago. And his findings regarding diminishing returns are similarly valid.

So how many do you test? As Nielsen found, 15 users get you a correlation of 0.90 with the results of the full study, while 30 testers give you a correlation of 0.95 -- “...certainly better, but usually not worth twice the money.” And to reach a correlation of 0.98, you need 60.

Technology can help offset that cost, though. In our experience, testing 15 or 30 people using online tools makes a negligible difference in cost and effort. If you’re running manual card sorts, however, fewer participants may be the more practical choice.
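
If your online tool lets you export the raw card-sort data, you can check the diminishing-returns pattern on your own study. The sketch below is a minimal illustration, not Nielsen’s exact method: it builds an item co-occurrence matrix -- the share of participants who grouped each pair of cards together -- for a random subsample and correlates it against the matrix from the full participant pool. The input format (one dictionary per participant, mapping each card to its group label) is a hypothetical example rather than any specific tool’s export.

```python
# A minimal sketch of checking diminishing returns in card-sort data.
# Input: a list of dicts, one per participant, mapping card name -> group label.
import numpy as np

def cooccurrence_matrix(sorts, items):
    """Share of participants who placed each pair of cards in the same group."""
    n = len(items)
    counts = np.zeros((n, n))
    for sort in sorts:
        for i, a in enumerate(items):
            for j, b in enumerate(items):
                if sort.get(a) is not None and sort.get(a) == sort.get(b):
                    counts[i, j] += 1
    return counts / len(sorts)

def subsample_correlation(sorts, items, sample_size, seed=0):
    """Pearson correlation between a random subsample's matrix and the full matrix."""
    rng = np.random.default_rng(seed)
    picked = rng.choice(len(sorts), size=sample_size, replace=False)
    sample = [sorts[i] for i in picked]
    full = cooccurrence_matrix(sorts, items)
    part = cooccurrence_matrix(sample, items)
    # Compare only the upper triangle so the diagonal doesn't inflate r.
    iu = np.triu_indices(n=len(items), k=1)
    return float(np.corrcoef(full[iu], part[iu])[0, 1])

# Example usage (placeholder variable names for your own exported data):
# for size in (15, 30):
#     print(size, subsample_correlation(all_sorts, card_names, size))
```

If the correlation flattens out somewhere between 15 and 30 participants, you’re seeing the same diminishing returns Nielsen describes.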

But “how many” is only part of the equation. The “who” matters just as much.

The “who” in user testing appears right in its name -- the users. However, there’s often resistance to ceding control over your information architecture to the very people who will be using it. Instead, some organizations prefer to focus on internal structures and prioritize their own staff -- believing they “know what their customers want.”

And, invariably, they don’t. At least not completely.

User testing is about aligning solutions with users’ needs, desires, and mental models. When that “user” pool is skewed internally, you run into challenges with:

  • experiential bias -- internal users are already familiar with the existing structure, so they will generally fall back on historical patterns;
  • jargon -- internal users often fall into the trap of using internally accepted terminology that may not resonate with actual users;
  • siloing -- internal users may default to classifications based on internal structures, because that’s what they experience day to day in their own work;
  • ego -- everybody’s project is the most important. While that’s the passion you want to see as an employer, it may not reflect what’s important to the end user; and
  • politics -- sometimes internal audiences are overrepresented out of a desire to make it seem that everyone has input and is being “heard.” But if that isn’t contextualized and those results overwhelm the end-user feedback, you’re creating a disconnect and doing yourself a disservice.

So... who is the who?

The “who” means ensuring you have accurate and equitable representation within your user pool.

If your site is intended to appeal to an audience that’s 70 per cent external and 30 per cent internal, then you’ll want your participant numbers to align, for the most part, with those audiences. If, in that case, you end up with 80 per cent internal responses -- or, worse, only look internally -- you’re likely to miss not only the opportunity to create an intuitive solution for your customers, but also the chance to gain some wonderful outside insight that can help inform your efforts.
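
Keeping that alignment honest is simple proportional arithmetic. The sketch below uses the 70/30 split and an illustrative total of 30 participants from the scenario above; swap in your own audience breakdown and sample size.

```python
# Keep recruitment proportional to the real audience. The 70/30 split and the
# 30-participant total are illustrative numbers from the scenario above.
audience_split = {"external": 0.70, "internal": 0.30}
total_participants = 30

quotas = {segment: round(share * total_participants)
          for segment, share in audience_split.items()}
print(quotas)  # {'external': 21, 'internal': 9}
```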

User research isn’t a numbers game. There’s no magic formula that gets it right; it takes interpretation and analysis. At its best, user research is foundationally data-driven -- but soliciting the right data means knowing what matters. “How many?” will always be important, but pure numbers aren’t enough -- especially if the “Who?” is wrong.

