7 Steps for More Effective A/B Testing
As marketers, we’re supposed to be great at running winning A/B tests.
We’re supposed to be able to run a test, put our feet up, and marvel at the success of the B-version.
Yet, it rarely works out that way because, most of the time, our A/B tests fail.
We let them run for weeks (or months!) and when it’s all said and done, the B-version outperforms the A-version only about 10% of the time.
That means we’re wrong 9 times out of 10.
And sadly, that’s not even the biggest problem. The real problem is how much time all of this stuff takes.
Running each test probably takes anywhere between 3-6 weeks on average, which works out to maybe 9-17 tests a year. Win 1 out of 10 of those, and most of us are only right about one time per year.
So how can we do it faster?
Why we’re only right 1 out of 10 times
My leading theory is that we just have too many ideas.
When we want to run an A/B test, most of us go online and search for A/B testing ideas. If I’ve done my math right, a single Google search turns up more than 295 ideas. I’m sure they’re good ideas, but the problem is there are too many of them.
And what we often do is write them all down in a list, and choose the one that sounds the best.
Once we run that test, we choose another one from our list that sounds good.
I think you can see where this is going.
And this is probably why most of us are only right about 10% of the time. Because just picking randomly like that isn’t going to get us the big wins we’re looking for.
What we need is a process
You’re probably thinking process = boring. But in this case, process = more revenue and more experiments.
Process = More Revenue
When your challenge is having too many ideas, using a process helps you come up with the right ideas, know which ones to choose, and really speed things up.
The core of the process is to find out what’s stopping people from converting… and fix it.
You’re probably thinking that’s the most obvious thing you’ve ever heard. But it’s often the simplest things that make the largest impact.
But how exactly do you find this out? Let’s break this down into a few actionable steps you can use right away.
How to come up with A/B test ideas
The first step is to take your list of 150+ A/B test ideas and throw them away. I’m sure they came from smart people, but they just might not be right for your situation. And starting from a list is the wrong approach.
We’re starting from scratch.
The same thing goes for your gut. You’re a smart, hard-working marketer who knows your product really well. Your gut is good; it just knows a little too much.
It’s what best-selling authors Chip and Dan Heath call the curse of knowledge.
We’re all so familiar with our websites and products that we just can’t see them with fresh eyes anymore. We look at our website every single day, and the way we see it is fundamentally different from the way our customers and first-time visitors see it.
And in this case, it really holds us back.
Listen to your customers and prospects
Again, this sounds obvious, but very few of us actually do it.
If we want to find out what’s stopping people from converting, then we need to listen to our customers and prospects. They’re the only ones that can show us why.
Let’s talk about some specific ways we can listen to get A/B test ideas that we won’t find on any list:
- Put Qualaroo on your pricing page and ask visitors, “If you’re not going to sign up today, can you tell us why not?” You can do this on any page on your site.
- Look at chat transcripts from when people chatted in with frustrations and confusion.
- Ask members of your sales and support teams, “What are the common questions that you’re hearing?”
- Run a user test where you send 3 users through your funnel, to your competition’s site, and to a huge company in your space (click here for the exact process).
- Run a free Peek test, and in about an hour you’ll get a video back of someone using your site.
Use a framework to run your experiments
Now that you’ve generated a lot of ideas from your customers and prospects, what you need is a framework to decide what to test and when.
Here’s the framework we use at UserTesting. You can take it, use it, and tweak it to your needs. You don’t have to use it word for word; do whatever is best for your team:
Step 1: Test objective
The first step is deciding your test objective. So before you run any A/B tests, write down the answer to this question in 1-2 sentences: “Why are we running this test?”
Why the heck are we doing this? How many times do you run tests without knowing exactly why you’re doing it? I’m guilty of this myself.
This question will prevent you from running a lot of unnecessary tests and wasting time.
Step 2: Hypothesis
All of these steps are important, but this might be the most important one: having a hypothesis, or an educated guess, about what you think the results will be. We follow this formula:
If X, then Y, because Z.
For example: if we shorten the signup form, then trial signups will go up, because prospects have told us the form feels like too much work.
And it’s very important to write this down.
Step 3: Opportunity size
The next thing to consider when running tests is opportunity size. If this test works, what’s the total impact we could expect to see over a year? Again, I think this is very important and often overlooked.
If you’re running an A/B test that (even if it’s wildly successful) is only going to contribute $50 for the rest of the year, then it might not be worthwhile for you to take the time to do it.
Whereas, if you’re running a test that has a lower likelihood of winning, but if it wins it’s going to have a huge impact on the business, then it might be worth doing.
But more often than not, we run tests without considering opportunity size.
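If you want to put a rough number on it, here’s a minimal back-of-the-envelope sketch in Python. Every input below (traffic, baseline conversion rate, expected lift, value per conversion) is a made-up assumption; plug in your own numbers:

```python
# Back-of-the-envelope opportunity sizing (all numbers are illustrative assumptions).
monthly_visitors = 20_000          # visitors who see the page you're testing
baseline_conversion_rate = 0.03    # current conversion rate on that page
expected_relative_lift = 0.10      # hypothesis: the B-version lifts conversions by 10%
value_per_conversion = 40.0        # average revenue (or pipeline value) per conversion

current_conversions = monthly_visitors * baseline_conversion_rate
extra_conversions_per_month = current_conversions * expected_relative_lift
annual_impact = extra_conversions_per_month * value_per_conversion * 12

print(f"Extra conversions/month if the test wins: {extra_conversions_per_month:.0f}")
print(f"Rough annual impact: ${annual_impact:,.0f}")
```

If the number that comes out looks more like $50 than a meaningful chunk of revenue, you have your answer before the test even starts.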
Step 4: Time to test
Next is determining the time involved. How long will it take to run this test?
Is it going to take 1 day, a month, or is it going to be a really slow one because you need to get engineering and designers involved, and you need to get sign-off?
Knowing all of this going into the test will help you weigh the time involved against the potential payoff.
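One way to estimate the calendar time is to work backwards from the sample size you’d need. Here’s a rough sketch using the standard two-proportion sample-size approximation; the conversion rates and traffic below are assumptions, and your analytics or testing tool may do this math for you:

```python
from scipy.stats import norm  # standard normal quantiles for the power calculation

# Rough test-duration estimate (illustrative numbers; swap in your own).
baseline_rate = 0.05       # current conversion rate (A-version)
expected_rate = 0.06       # what you hope the B-version does (a 20% relative lift)
alpha, power = 0.05, 0.80  # conventional significance level and statistical power
daily_visitors = 500       # visitors entering the test per day, split across both variants

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
variance = baseline_rate * (1 - baseline_rate) + expected_rate * (1 - expected_rate)
n_per_variant = (z_alpha + z_beta) ** 2 * variance / (expected_rate - baseline_rate) ** 2

days = 2 * n_per_variant / daily_visitors
print(f"~{n_per_variant:,.0f} visitors per variant, roughly {days:.0f} days at current traffic")
```

With these numbers the test lands right in that 3-6 week range; with less traffic or a smaller expected lift, the same math can easily stretch to several months, which is worth knowing before you start.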
Step 5: Likely scenarios
Then write out the 3 scenarios that you think are most likely to happen.
Most of the time one of those things will happen, so you won’t be surprised.
Other times something will happen that you didn’t account for. But rather than being dumbfounded, you can approach it from a different angle and say, “Oh, I didn’t have that as a likely scenario, because I was assuming the clickthrough rate would be between 2% and 4%, and it was actually 20%, which I had no way of knowing.”
And that will inform you for next time.
So instead of being in shock that your test worked (or not), you can track it back to the specific assumption that you were making, which helps you make your future A/B tests better.
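A toy way to make this concrete: write the scenarios down as ranges before the test starts, then check where the real result landed. The scenario names and ranges here are just illustrations:

```python
# Scenarios written down before the test (ranges are illustrative assumptions).
scenarios = {
    "B loses":       (0.00, 0.02),   # CTR below our current 2% baseline
    "roughly a tie": (0.02, 0.04),   # CTR in the 2-4% range we assumed
    "B wins big":    (0.04, 0.08),   # the optimistic case
}

observed_ctr = 0.20  # what the test actually returned

matches = [name for name, (low, high) in scenarios.items() if low <= observed_ctr < high]
if matches:
    print(f"Result fits the '{matches[0]}' scenario we planned for.")
else:
    print("Result falls outside every scenario; go back and find which assumption was wrong.")
```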
Step 6: Next steps
After you run this A/B test, it’s important to consider what you should do next.
If it’s a big success, how can you expand it? What are similar campaigns you can run that will get you similar results?
If it was a big failure (and that’s okay, a lot of tests are) then what can you learn to run better tests in the future?
Step 7: What worked & what didn’t
And lastly, what worked and what didn’t? It’s important to take the time to consider all the work that you’ve done.
So what did you learn? What are you going to do differently on the next test?
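If it helps to keep all seven steps in one place, here’s one possible way to capture them as a single record per experiment. This is just a sketch of the idea, not a tool we actually use; the field names and example values are mine:

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    """One record per A/B test, mirroring the seven steps above."""
    objective: str              # Step 1: why are we running this test?
    hypothesis: str             # Step 2: "If X, then Y, because Z"
    opportunity_size: float     # Step 3: estimated annual impact ($)
    time_to_test_weeks: float   # Step 4: how long it will take
    likely_scenarios: list[str] # Step 5: the 3 outcomes you expect
    next_steps: str = ""        # Step 6: filled in after the test
    what_worked: str = ""       # Step 7: learnings for next time

plan = ExperimentPlan(
    objective="Reduce drop-off on the pricing page",
    hypothesis=("If we answer the top three sales questions on the pricing page, "
                "then trial signups will increase, because prospects told us "
                "they leave to go find those answers elsewhere."),
    opportunity_size=28_800,
    time_to_test_weeks=5,
    likely_scenarios=["B loses", "roughly a tie", "B wins big"],
)
```

Filling in the first five fields forces the conversation before the test starts; the last two get filled in afterward.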
Conclusion
Again, these steps aren’t meant to be followed rigidly. We use them on the marketing team at UserTesting a lot, but hopefully you can think through a framework for your own team (or for yourself) that will help you make the right decisions.
So let’s go back to the problem as we’re wrapping up here. We’re only right 1 out of 10 times, which means we’re only seeing 1 winner per year on average. If you can be right 2 or 3 times out of 10, what kind of impact would that have on your business?
I think there’s a big opportunity for us marketers to come up with our B-versions by listening to our customers and prospects rather than picking from a long list. And through that we can start to have more big winners each year, and really drive our businesses forward.