Why User Feedback and A/B Testing Need Each Other

Optimization is nothing new. It can take many forms, but simply put, it’s the process of meeting the growing demands of today’s tech-savvy digital users. Optimized digital products can be the difference between happy, loyal brand advocates and unhappy users who don’t come back.

True optimization means building an effortless experience for your users. Whether it’s a website, app or email, your users expect every click, scroll or swipe to take them where they want to go, in a way that feels intuitive.

Today’s companies know they need to optimize their digital channels to stay ahead, but how exactly do they do it? Where do they start? And which approach is best?

Some will tell you that there’s no better method than hard, quantitative data from analytics tools, while others will argue that rich, qualitative insights from user feedback are the way to go.

Well, the most effective way to get the information you need to optimize your digital channels is actually a combination of the two. Analytics and quantitative data tell you what is happening; qualitative user insights tell you why.


First Things First

The first step in becoming data-driven in your approach to UX and conversion rate optimization is to use direct data to determine where your users are getting stuck. By direct data, we mean sources like web analytics. This is the logical first step: it initiates and drives the ideation process, and it improves your understanding of where to direct your optimization efforts.

This will give you the information you need to start hypothesising what could be impacting the user experience, but what next? Do you simply dive in and start running tests on particular page elements? Well, you could, but without knowing the exact reason for the friction, it might end up costing you time and resources. To get a better understanding of how visitors are moving around your site and to inform your hypothesis, you must pair quantitative data with qualitative data.


A/B Testing

A/B testing is the go-to optimization process for most companies when they know they have an improvement to make on their website. For example, from analytics, you see a low conversion rate on a sign-up page and assume that a particular element on the page is causing the problem.

Using an advanced A/B testing tool like Optimizely allows you to try out different variations of CTAs, images or copy in order to improve the overall success of the page. It works by randomly showing users different versions of the page to determine which is more successful. The original is usually the control in the test, with the altered version being the variation.
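
To make the mechanics concrete, here’s a minimal sketch in TypeScript of what a tool like this does under the hood: bucket each user into a version deterministically, so they always see the same page, then compare conversion rates with a simple two-proportion z-test. The hashing scheme and data shapes here are illustrative assumptions, not Optimizely’s actual implementation.

```typescript
// A minimal sketch of A/B test mechanics: deterministic bucketing plus a
// two-proportion z-test to compare conversion rates. Illustrative only,
// not Optimizely's actual API or implementation.

type Variant = "control" | "variation";

// Deterministically assign a user to a bucket so they always
// see the same version of the page on repeat visits.
function assignVariant(userId: string): Variant {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit string hash
  }
  return hash % 2 === 0 ? "control" : "variation";
}

interface Result {
  visitors: number;    // users who saw this version
  conversions: number; // users who completed the goal (e.g. signed up)
}

// Two-proportion z-test: is the difference in conversion rate
// between the two versions likely to be real, or just noise?
function zScore(control: Result, variation: Result): number {
  const p1 = control.conversions / control.visitors;
  const p2 = variation.conversions / variation.visitors;
  const pooled =
    (control.conversions + variation.conversions) /
    (control.visitors + variation.visitors);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.visitors + 1 / variation.visitors)
  );
  return (p2 - p1) / se;
}

// Example: |z| > 1.96 corresponds to roughly 95% confidence.
const z = zScore(
  { visitors: 5000, conversions: 400 }, // control: 8.0% conversion
  { visitors: 5000, conversions: 465 }  // variation: 9.3% conversion
);
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
```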

By directly comparing the two versions, you can effectively determine what’s impacting the success rate. This is a great way to identify problem areas and can help to inform future design and UX decisions, but how do you know the things you’ve chosen to test are the right ones? Or once you’ve identified several elements to test, how do you prioritise which to test first? 

This is where qualitative data comes in.


User Feedback

Using a tool for collecting qualitative user insights, like Usabilla, can save you time by pointing you to the areas that need to be tested. It substantiates your test criteria and validates the need for the test in the first place, effectively removing the element of ‘shooting in the dark’.

If you’ve seen from analytics that you have a low sign-up rate, ask your users directly what’s stopping them from converting. Then you can move that element to the front of the line for an A/B test. The great thing about this kind of feedback is that it might point to something you’d overlooked. For example, you could assume that something simple like the colour or placement of a CTA is stopping users from converting when really, it’s a lack of transparent pricing information.

User feedback validates, and sometimes trumps, internal assumptions. Analytics can only get you so far; collecting user feedback is the only way to truly understand why your users do the things they do.

So you’ve decided what to test based on user feedback and rolled out an A/B test that gives you a clear winner. End of story, right? Well, user feedback can add a final layer to the optimization process by validating the end result. Ask your users directly what they think of the change you’ve made or simply give them a chance to express whether or not they’re happy with that particular page.

Rather than just blindly following the numbers, you’ll be able to read the feedback on a successful variation to understand why it performed better. This allows for informed iterations and faster optimization. Usabilla feedback items will pull in any associated A/B experiment you’re running, so you can see the direct correlation between the feedback and the test. With Usabilla, you can also filter by experiment, so you can gauge the overall sentiment of that test.
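
As a rough illustration of what that filtering enables, here’s how you might compute average sentiment per variant from exported feedback items. The FeedbackItem shape is a hypothetical assumption; Usabilla’s real export fields will differ.

```typescript
// Hypothetical sketch: gauging sentiment per test variant from feedback
// items. The FeedbackItem shape is assumed, not Usabilla's export format.

interface FeedbackItem {
  experiment: string;               // the A/B experiment the user was in
  variant: "control" | "variation"; // which version they saw
  rating: number;                   // e.g. 1 (unhappy) to 5 (happy)
  comment: string;
}

// Average rating per variant for one experiment, so you can check whether
// the variation that wins on conversions also feels better to users.
function sentimentByVariant(
  items: FeedbackItem[],
  experiment: string
): Record<string, number> {
  const sums: Record<string, { total: number; count: number }> = {};
  for (const item of items) {
    if (item.experiment !== experiment) continue;
    const bucket = (sums[item.variant] ??= { total: 0, count: 0 });
    bucket.total += item.rating;
    bucket.count += 1;
  }
  return Object.fromEntries(
    Object.entries(sums).map(([variant, s]) => [variant, s.total / s.count])
  );
}
```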

Combining A/B testing with Usabilla also means you can target slide-out surveys to trigger on specific test variations. For example, if you’re A/B testing a big change on your homepage, survey the users who see the variation and ask what they think of it. This reduces the risk of the changes you roll out going forward.
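
What that targeting might look like in practice is sketched below; getActiveVariant and showSlideOutSurvey are hypothetical stand-ins for whatever hooks your A/B testing and feedback tools actually expose.

```typescript
// Hypothetical sketch: only survey users who saw the test variation.
// getActiveVariant and showSlideOutSurvey stand in for your tools' real APIs.

declare function getActiveVariant(experimentId: string): string | undefined;
declare function showSlideOutSurvey(surveyId: string): void;

window.addEventListener("load", () => {
  // Only users bucketed into the redesigned homepage get asked about it;
  // control users are left alone.
  if (getActiveVariant("homepage-redesign") === "variation") {
    // Give people a moment with the new design before interrupting them.
    setTimeout(() => showSlideOutSurvey("homepage-redesign-feedback"), 15000);
  }
});
```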


The Process in 5 Steps

Bringing user feedback into the A/B testing process will save time and resources: you will know what to test, what to prioritise and whether the end result is the right one.

  1. Use quantitative analytics to initially determine areas of friction. Where are users bouncing? Has the conversion rate dropped on a certain page?
  2. Run user feedback surveys to ask your users directly what’s impacting their behaviour. For example, you could run an exit survey on a page with a high bounce rate that triggers when a user’s cursor moves out of the page, asking them why they’re leaving (see the sketch after this list).
  3. The qualitative insights you get will point to some things you can test with an A/B experiment and will show you what needs to take priority in the testing process. 
  4. Once you’ve defined the winning variant, make the necessary changes to the page or element that was being tested.
  5. Get feedback on the change you’ve made from your users by asking them directly or by allowing them to express their opinions on the overall page. Find out why that change was the right one. Validate the process.
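
For step 2, here’s a minimal sketch of an exit-intent trigger: when the cursor leaves through the top of the viewport (usually heading for the back button or the address bar), show a short survey once. The showExitSurvey hook is a hypothetical stand-in for your feedback tool’s API; the mouse-out detection itself is a common browser pattern.

```typescript
// Exit-intent survey trigger (step 2). showExitSurvey is a hypothetical
// stand-in for your feedback tool's API.

declare function showExitSurvey(surveyId: string): void;

let exitSurveyShown = false;

document.addEventListener("mouseout", (event: MouseEvent) => {
  const leavingDocument = event.relatedTarget === null; // cursor left the page
  const headingUp = event.clientY <= 0;                 // out through the top
  if (leavingDocument && headingUp && !exitSurveyShown) {
    exitSurveyShown = true; // ask once, don't nag
    showExitSurvey("why-are-you-leaving");
  }
});
```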


This iterative process of optimization will save you time, money and resources. Keeping the channel to your users open through user feedback software like Usabilla means you’ll always know what to optimize for a better user experience, and you’ll have no shortage of hypotheses to feed your A/B tests.

User feedback and A/B testing are great processes on their own and can give you tangible, actionable results. However, if you want to be truly data-driven (and user-centric) in your approach to optimization, you need to combine quantitative and qualitative sources to deliver the seamless digital experience your users are looking for.