When an organization gets support for user testing, it’s time to celebrate! Jump right in, find some users, and run the tests—right? Actually, it turns out there’s a lot more that goes into preparing for user testing. A little extra time spent planning and preparing at the start will save time, money, and headaches later on.
A few months ago, I outlined a wide range of common objections people often have to user testing. The goal was to get designers past the interoffice barriers that so often come up before user testing. But selling the team on the idea is only the first step. To successfully learn about users and prove to stakeholders that they made the right choice (and should continue to fund testing in the future), we need to prepare.
As with most big endeavors, do-overs are extremely costly, and they can easily be avoided. Not only does prep time save time and money in the long run, it also helps us test more accurately. Taking a few hours or days before launch to identify the right testing audience, goals, budget, timelines, and KPIs will ensure that the test is successful and provides measurable results to share with stakeholders and teammates.
In this article, we’ll focus on three things that can make or break a usability test: the audience to test, the goals for the test, and the KPIs we’ll use to measure success.
Test the Right Users
Whether we’re testing an app, website, or even a new patent-pending bagel maker, there’s a specific group of people who will best represent the target audience. Some companies already collect a lot of data about their customers, but even then, how do they know if they’re collecting the right information? What information is useful to learn about the target audience?
Identify the Audience
Analytics will show us our website visitors in numbers—per day, per month, and year-over-year. But to get useful results from usability testing, we need to make site visitors more human. We need to understand what region of the country (or globe) they live in, what language they prefer to surf the web in, what their areas of interest are, what types of products they purchase online, and so on. All of this information helps us to select the right assumptions to test.
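If your analytics or commerce platform can export raw visit records, even a few lines of code will surface these breakdowns. Here's a minimal sketch in Python, assuming a hypothetical visits.csv export whose column names are purely illustrative:

```python
import pandas as pd

# Hypothetical export of visit records; the file name and columns
# (visitor_id, region, language, purchase_category) are illustrative.
visits = pd.read_csv("visits.csv")

# Where do visitors live, and what language do they browse in?
print(visits.groupby("region")["visitor_id"].nunique().sort_values(ascending=False))
print(visits["language"].value_counts(normalize=True))

# What types of products do they actually purchase online?
print(visits["purchase_category"].value_counts().head(10))
```

Even rough numbers like these make it much easier to decide which assumptions deserve a test.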
One way to get a clear idea of the target audience is to run a series of on-site screening questionnaires or email a quick demographic survey to current users. Content strategist Lindsey Gates-Markel outlines a good, high-level strategy for identifying target users or audiences in her presentation on content for actual users (beginning on slide 25):
- Identify the one or two goals a user will have for your website, such as learn, purchase, find advice/support, complete a task, or share
- Explain in concrete actions what the top-priority goals are for your users, e.g., share bagel recipes, find bagel maker reviews, purchase a bagel maker…
- Look at your site to see who is currently achieving these goals; check popular pages, behavior flows, and search terms in Google Analytics and other tools for indicators
- Make a big list of all the potential users who could benefit from your site, such as chefs, bagel enthusiasts, cooks, those who can’t cook, bagel companies, or food critics
- Divide those users into three categories: Primary, Secondary, Tertiary. Be honest and clear with yourself — which users do you actively draw or wish to draw more of? Those are Primary users!
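To make that last step concrete, here's a minimal sketch of the triage. The audience names and goal-completion counts are made up for illustration; the final Primary/Secondary/Tertiary call is still a judgment about who you wish to draw, but observed goal completions are a sensible starting point:

```python
# Hypothetical goal-completion counts per candidate audience,
# e.g., gathered from popular pages or behavior flows in analytics.
completions = {
    "bagel enthusiasts": 4200,
    "chefs": 1800,
    "bagel companies": 350,
    "food critics": 120,
}

# Rank audiences by how often they already achieve top-priority
# goals, then bucket them into tiers.
ranked = sorted(completions, key=completions.get, reverse=True)
tiers = {
    "Primary": ranked[:1],
    "Secondary": ranked[1:3],
    "Tertiary": ranked[3:],
}

for tier, audiences in tiers.items():
    print(f"{tier}: {', '.join(audiences)}")
```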
These five steps provide a better picture of who uses the site and who could benefit from using it. And that picture comprises the target audiences: the best people to test specific ideas or functions against. For instance, the chefs would be an ideal audience for testing high-quality reviews of many brands of bagel makers; the bagel enthusiasts would be a good audience for testing the best recipes. The next step is to draw customer profiles for these audiences and determine what assumptions to test.
Customer Profiles
You have probably heard of content strategists or research teams delivering personas to stakeholders. Personas help the team group website users by the emotional state and context surrounding those users’ actions. Customer profiles are geared more toward purchase paths and goal completion: a customer profile identifies users by their researching/shopping/buying habits, rather than their emotional state or historical context.
To create customer profiles, look for patterns in the actions or purchases website users exhibit. For instance, if users who purchase the bagel maker often (76% of the time) also purchase a bagel recipe book at the same time, they form one category (a quick code sketch of one such check appears below). Look at:
- the types of purchases made,
- the number of purchases,
- the items they combine in a purchase,
- which services they sign up for,
- the resources they use to support a past purchase,
- or the resources they use to educate themselves prior to a purchase.
It’s easy to get carried away in identifying patterns, so keep it high-level (or testing could take 100 years). For instance, if there are six categories, like Purchase Multiple Items, Purchase Single Items, Research Only, Research and Purchase, Purchase and Customer Support, and Customer Support Only, it makes sense to combine those into broader categories like Purchasing, Researching, and Post-Purchase Support. The goal of customer profiles is to identify, in broad strokes, the main user actions and their general outcomes. Laser-like focus on individual actions isn’t necessary for the first several rounds of testing, although later tests may benefit from a more specific approach.
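Here's what checking one such pattern might look like in code: a minimal pandas sketch, assuming a hypothetical order_lines.csv export whose column names are illustrative, that measures how often the bagel maker and the recipe book land in the same order:

```python
import pandas as pd

# Hypothetical order-line export; the file name and columns
# (order_id, product) are illustrative.
orders = pd.read_csv("order_lines.csv")

# Collect the set of products in each order.
baskets = orders.groupby("order_id")["product"].apply(set)

# Of the orders containing the bagel maker, how many also
# contain the recipe book?
maker_orders = baskets[baskets.apply(lambda basket: "bagel maker" in basket)]
co_rate = maker_orders.apply(lambda basket: "recipe book" in basket).mean()
print(f"{co_rate:.0%} of bagel-maker orders also include the recipe book")
```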
For each customer profile, briefly identify their device preferences, major questions or concerns, and a few goals they may be trying to accomplish. One simple way to do this is to borrow the old journalism standby: who, what, where, why, when, and how. Answer these questions about each profile:
- Who is this person (in their minds)?
- What are they trying to accomplish on my site?
- Where is the first place they will look?
- Why would they click this over that?
- When do they choose to make a purchase from me?
- How do they prefer to gather information?
Outlining each customer profile on a separate sheet is a good idea: the profiles serve as directional checks along the way as tests are developed.
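For teams that keep project artifacts in a repository rather than on paper, the same one-sheet outline can live in code. This is just a sketch; the class shape and field values are purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    """A one-sheet customer profile: who, what, where, why, when, and how."""
    name: str
    who: str    # who this person is, in their own mind
    what: str   # what they are trying to accomplish on the site
    where: str  # the first place they will look
    why: str    # why they would click this over that
    when: str   # when they choose to make a purchase
    how: str    # how they prefer to gather information
    devices: list[str] = field(default_factory=list)
    goals: list[str] = field(default_factory=list)

researcher = CustomerProfile(
    name="Research Only",
    who="A careful comparison shopper",
    what="Find trustworthy bagel-maker reviews",
    where="The reviews section, usually via site search",
    why="Clicks whatever promises side-by-side comparisons",
    when="Only after reading several independent reviews",
    how="Long-form reviews and user ratings",
    devices=["mobile", "desktop"],
    goals=["Shortlist two or three bagel makers"],
)
print(f"{researcher.name}: {researcher.what}")
```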
For larger teams, it may also be helpful to outline these customer or user profiles in large format and hang them where the team can see them. My team at Clearlink did this for a software design project, and I was surprised how many times a developer would check his idea for a new field or function against the goals of one of our user profiles!
Set Clear Goals
When we set out to test, we often already have a hunch about what’s off or not working. Otherwise, why would we decide to test in the first place? Take that hunch and outline up to three clear goals to test. When observers know in advance what to look for, they’ll get more out of watching the user tests.
How to Make a Goal
A solid goal includes a hypothesis (what we believe is happening), an educated guess as to the cause (why we believe it is happening), and a clearly defined outcome the team expects from the test. Testing the hypothesis forces the team to recognize assumptions they may have been making along the way, whether in the design and dev process or in the marketing approach. A good hypothesis will call into question something the team considers a “given,” like “everyone understands what ‘submit’ means” or “CAPTCHAs make people feel secure.”
Goals should look something like this:
- I think X is happening because users want Y but don’t realize we offer it.
- No one is clicking on Y; I think the layout is confusing. This test should reveal whether the layout is keeping users from clicking Y.
- This page doesn’t get enough traffic/engagement because it’s [too wordy, not clear enough, lacking a CTA]. This test should reveal why so few users are engaging with the page.
It’s really easy to skip this step or settle for processing it mentally, but even a team of one should write out the goals, both as a reminder of what you are searching for and to help stakeholders or future team members understand the process.
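One lightweight way to write goals out, sketched here as a hypothetical record rather than any prescribed format, is to force every goal to state its hypothesis, suspected cause, and expected outcome explicitly:

```python
from dataclasses import dataclass

@dataclass
class TestGoal:
    """A written-out goal: what we think is happening, why, and what the test should show."""
    hypothesis: str        # what we believe is happening
    suspected_cause: str   # why we believe it is happening
    expected_outcome: str  # what the test should reveal

goal = TestGoal(
    hypothesis="No one is clicking on Y",
    suspected_cause="The layout is confusing",
    expected_outcome="Show whether the layout keeps users from clicking Y",
)
print(goal)
```

A list of these records doubles as documentation for stakeholders and future team members.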
And remember: it is totally OK to be wrong about the outcome. This is the most fun part of user testing: users are unpredictable, emotional creatures, and learning directly from them what’s wrong with a site or app is always an adventure. It’s OK to fail; just fail gracefully. Review the goals and evidence against users’ input and find out why the hypothesis was wrong. Sometimes it’s as simple as “you are not your user.” Learning why a hypothesis was wrong can be as productive as being proven right!
Find the KPIs
So we have an audience in mind and goals for the test we’d like to administer; now we have to decide what we are going to measure, and how, to determine success. A KPI (key performance indicator) is any metric you can reasonably rely on to indicate change. For websites, these include (but are not limited to) new vs. returning users, conversions (such as button clicks, email submits, form fills, or phone calls), and bounce rates. For apps, these include downloads, opens, daily interactions, in-app conversions, reviews, and so on. And for e-commerce or services, they might be inquiries, purchases, conversions, reviews, comments, and ratings.
For a usability test, once we’ve outlined our goals, the next step is to clearly outline a few KPIs that will indicate the test’s success or failure. For instance, if users are being interrupted with a pop-up offer or email request, we should watch the bounce rate of the page, the actual conversions on the pop-up, and how many people exit out of the pop-up with no conversion. All of these will indicate how well-received the pop-up is. If we change the design on the bagel maker, we can watch purchase numbers, return numbers, comments and reviews, and ratings as metrics of success.
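As a minimal sketch of what watching those pop-up KPIs could look like, assuming a hypothetical popup_events.csv log with one row per pop-up impression and an illustrative outcome column:

```python
import pandas as pd

# Hypothetical event log; the file name and columns are illustrative.
# outcome is one of: "converted", "dismissed", "bounced".
events = pd.read_csv("popup_events.csv")

totals = events["outcome"].value_counts()
n = len(events)  # assumes at least one recorded impression

print(f"pop-up conversion rate: {totals.get('converted', 0) / n:.1%}")
print(f"dismissal rate:         {totals.get('dismissed', 0) / n:.1%}")
print(f"bounce rate:            {totals.get('bounced', 0) / n:.1%}")
```

Comparing these numbers before and after a change is what turns a hunch into a measurable result.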
Measuring test success is as simple as watching KPIs, noting changes, and determining whether the testing goal was correct or off-base. As we said before, no goal is a bad goal: the results sometimes prove that you are not your user. If a test fails, then there’s all the more reason to pivot, iterate, and continue testing.
Next Step: Launching a Test
Now that we’ve learned how to overcome barriers and internal objections to testing, and we’ve outlined the audience for testing, the goals around the test, and the KPIs by which we’ll measure success…what’s left? Testing! In the final article of this series, I’ll highlight a range of tests that companies can execute themselves. I’ll talk a little about popular software currently used for testing, but also about guerrilla testing: how to DIY the usability testing scenario.