The (Sometimes) Inexact Science of Measurement
I was at a shortlist presentation yesterday when I was asked about web success metrics and for examples of results we've helped other clients achieve. To be honest, this question caught me a bit off guard. Not because we don't discuss or plan for success metrics on our projects, but because success looks very different for every one of our clients.
For one client, a mobile-first design approach built on an easy-to-use content management system might be considered a great success. For another, a 75 per cent increase in a specific type of conversion is success. It really does vary.
What's most important is that we define what success looks like for your organization. How do you measure success? Does A+B always equal C? Or are the algorithms and influences more nuanced?
With web projects, it can sometimes be challenging to identify whether something worked. Yes, we can measure tangible items like page load speed; yes, we can see how site structure affects the findability of content; and yes, we can talk about how the user experience improves.
But can we definitively say that this particular action led to a sale? That’s much more challenging to do.
For example, think of the last time you bought a pack of gum. Maybe it was an impulse buy at a pharmacy counter. So was that point-of-sale placement the definitive marketing tactic that got you to buy the product? Or were you influenced by multiple tactics -- billboards on the street, commercials on TV, and your friend sharing a piece of his or her pack with you (advocate-based experiential marketing, obviously)?
That’s the challenge with making absolute determinations about content marketing efforts -- usually it’s a whole environment of influences that leads to success, not just one particular input or action.
So how can you effectively measure success? There are a few tactics you can use:
Define Success
The first thing is to define what success means. Is checking off a few boxes for key deliverables success? For example, does a rebranded, mobile-friendly site with quicker page load speeds count as a success? For many people, that would be the case.
When we look at improving a user experience, it’s often about removing barriers. Whether we’re talking about accessibility in general or improving the findability of content through better search and faceting, those are successes in and of themselves.
Does that automatically mean an increased conversion rate? Maybe, maybe not. But it’s still a success.
Redefine Failure
Especially on the web, there are too many truisms that aren’t actually true. Many companies are quick to point to metrics like “increased page views,” “increased time on site,” or “reduced bounce rate” as positives. Those numbers are easy to influence -- and moving them doesn’t always mean the impact has been positive.
For example, a well-structured site with intuitive information architecture and proper tagging could result in far fewer page views and a much higher bounce rate. Why? Because you’re getting people where they need to go, right away. They’re searching for content and finding it immediately. You’re serving their need instantly, without requiring them to go elsewhere.
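To make that concrete, here’s a minimal sketch in Python of why bounce rate alone can mislead. The session records and field names (pages_viewed, task_completed) are invented for illustration, not any real analytics schema:

```python
# Minimal sketch: bounce rate vs. task success on hypothetical session data.
# Field names (pages_viewed, task_completed) are assumptions for illustration.

sessions = [
    {"pages_viewed": 1, "task_completed": True},   # found it immediately, then left
    {"pages_viewed": 1, "task_completed": True},
    {"pages_viewed": 1, "task_completed": False},  # a true failure
    {"pages_viewed": 6, "task_completed": True},   # wandered, eventually found it
    {"pages_viewed": 4, "task_completed": False},  # wandered and gave up
]

total = len(sessions)
bounces = sum(1 for s in sessions if s["pages_viewed"] == 1)
successes = sum(1 for s in sessions if s["task_completed"])
good_bounces = sum(1 for s in sessions if s["pages_viewed"] == 1 and s["task_completed"])

print(f"Bounce rate:  {bounces / total:.0%}")    # 60% -- looks "bad" in isolation
print(f"Task success: {successes / total:.0%}")  # 60% -- most visitors got what they came for
print(f"Bounces that were actually successes: {good_bounces}/{bounces}")
```

The takeaway: pair behavioural metrics with some measure of whether visitors got what they came for, rather than reading bounce rate on its own.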
Set a Baseline
The best way to measure success is to define a baseline. If your goal is to improve the customer experience, then you need to know customers’ current level of satisfaction. A simple user satisfaction survey that establishes those baseline metrics allows you to measure and adjust as a new solution is developed. Six months post-launch, you can run the survey again and see how the numbers compare. If they’ve improved, then you know you’ve done your job.
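As a sketch of that before-and-after comparison, here’s how you might check whether the change in survey scores is more than noise. This assumes Python with SciPy available, and the scores themselves are invented for illustration:

```python
# Sketch: compare baseline satisfaction scores with scores gathered six
# months post-launch. The score lists below are invented for illustration.
from scipy.stats import ttest_ind

baseline_scores = [6, 7, 5, 6, 8, 5, 7, 6, 6, 5]    # 1-10 satisfaction, pre-redesign
postlaunch_scores = [8, 7, 9, 8, 6, 9, 8, 7, 9, 8]  # same survey, six months later

# Welch's t-test: does the post-launch mean differ from the baseline mean?
result = ttest_ind(postlaunch_scores, baseline_scores, equal_var=False)

print(f"Baseline mean:    {sum(baseline_scores) / len(baseline_scores):.1f}")
print(f"Post-launch mean: {sum(postlaunch_scores) / len(postlaunch_scores):.1f}")
print(f"p-value:          {result.pvalue:.4f}")  # small p-value -> improvement unlikely to be chance
```

With real surveys you’d want a larger sample and identical questions across both runs, but the shape of the comparison is the same.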
Isolate Influences
If you want to measure whether your website is driving a particular action, then that influence has to be isolated. In any sort of testing, to fully attribute a change to an action, that action has to be performed in isolation. Sure, your focus may be a new website, but if you complement it with an aggressive advertising campaign, along with other activities, then how can you confidently attribute change to any one item?
This also allows you to test individual elements on your page. Want to see how a call to action performs? Change only that element and measure the results. Isolate your influences and you can be confident in your results. Otherwise, it’s mere speculation.
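Here’s a minimal sketch of that kind of isolated test in Python, with invented visitor and conversion counts: show half your visitors the old call to action and half the new one, change nothing else, and compare the two conversion rates with a two-proportion z-test.

```python
# Sketch: two-proportion z-test for an A/B test of a call to action.
# Visitor and conversion counts are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: existing call to action; variant B: the one change under test.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=168, n_b=2400)
print(f"A converts at {120/2400:.1%}, B at {168/2400:.1%}")
print(f"z = {z:.2f}, p = {p:.4f}")  # small p -> the new call to action likely made the difference
```

Because only the call to action differs between the two groups, a significant difference can be attributed to that one change -- which is exactly the isolation this step argues for.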
How do you measure success? We’d love to hear your experiences and best practices. Let us know in the comments below.