How to choose a user research method / by Gavin Lau


“What’s your process?”

This is a question that you’ll often get from prospective clients, or in an interview for a UX design position. Of course, your exact process will end up being at least a little bit different for just about any UX design project you undertake. Whatever your full answer may be, as I recently wrote, user research has to be a part of that process. In most cases, you just can’t afford not to do it.

What’s less clear is what kind of user research you should do. There are a lot of different techniques to choose from, each with different strengths, weaknesses, and research goals. This article is intended to give a quick rundown of how to decide on what your research goals are, and what techniques you can use to achieve those goals.


Understand the goal

The most important question to ask yourself before deciding on what research methods to use is: What do I want to know?

How do you go about answering that question? You could start by asking yourself why you want to learn that thing. Identify what you already know about your users, and what you don’t know about them. What are your knowledge gaps? Some examples of useful questions that you may or may not already know the answers to:

  • Who are the users?
  • What are their behaviours, goals, motivations, and needs?
  • What assumptions have you made about them?
  • How do they currently use your product?
  • What other products do they use?
  • Where do they have problems with their workflow?
  • Do they like using your product?

Once you know what you’re trying to learn, you can start thinking about how to learn it.

One way to break down the methods, suggested by Christian Rohrer, is on two axes: Attitudinal vs. Behavioural studies, and Qualitative vs. Quantitative studies. In attitudinal studies, you’re trying to find out what people say about a subject, while in behavioural studies, you’re analyzing what people are actually doing. Qualitative methods tend to be stronger for answering “why” types of questions, while quantitative methods do a better job of answering questions like “how many” and “how much”.


For example, if you’re asking the question: “How many users give up partway through our sign up process?”, then you might want to consider a more quantitative, behavioural study.

Tomer Sharon has offered a different framework for categorizing research questions, which I’ve found very useful. He suggested that there are broadly three types of questions that UX research is helpful for answering:

  • What do people need?
  • What do people want?
  • Can they use it?

You need to understand what your product should be doing (the needs and wants of users) before worrying about whether the product is doing that thing correctly (usability).

So, which techniques can you use to answer each question type? I’m going to break some of them down below. This is by no means an exhaustive list of user research techniques, and some of these techniques could be used to answer more than one of the question types, but nevertheless it should provide a good starting point.

A quick aside: Before starting any user research project, it’s important to keep a couple of things in mind:



What do people need?

Contextual inquiry

Also called observation, or a site visit, contextual inquiry involves studying people as they go about their everyday lives or tasks. If you’re performing a contextual inquiry study on engineers who produce intelligent piping and instrumentation diagrams, you would go to their office and watch an engineer go about their job. Have them show you how they do things. Ask them questions such as “How do you do that?” or “Can you show me how you did that?” to try to dig more deeply into how they’re accomplishing their tasks.

Observe how users typically accomplish their tasks (Image from Pexels)

Take notes about what they're doing, what difficulties they face, and your own thoughts on it all. You want to get into their heads to understand their needs and expectations, but try not to interrupt them more than necessary. As much as possible, you should be observing what they would typically do to accomplish their tasks.

I’m going to lump ethnographic research in here as well. Ethnographic research tends to focus more on social groups and how they collaborate or interact together, but it has a lot of overlap with contextual inquiry and can be useful for answering some similar types of questions.



Interview

In user interviews, you'll typically meet with users, one at a time, and ask them questions relevant to your project. Usually this is done quite early in the process, and it can be useful for reviewing your product goals. You need practice to get good at interviewing users. Questions often have to be asked in the right way to get good responses, you have to know when to follow up and dig into an answer, and you have to be able to listen well. You should definitely have an interview protocol or script prepared beforehand.

Interviews can be very powerful, but take skill to do well (Image from Pexels)

Remember that during an interview, users can often make up an opinion that they don't actually feel strongly about. They can also talk a lot about things that don't actually matter to them, which can be misleading.

One technique that can help with this is called the critical incident technique. Your interview subjects may remember some specific cases where the product worked particularly well or poorly, and can often provide more vivid details about these incidents. You can use this to get an idea of the strengths and weaknesses of your product when it comes to helping users accomplish their tasks.

You can also use interviews to help you identify questions to ask in a broader questionnaire or survey. On the other hand, you can use interviews after you’ve seen the results of a questionnaire and want to dive into some of those questions more deeply.



Surveys and Questionnaires

Surveys and questionnaires can provide you with some answers similar to what you’d get from user interviews. The downside is that you don’t have the ability to dive more deeply into those answers as there’s no direct interaction with the users. On the plus side, they allow you to get a larger volume of responses, which can open up the opportunity for more quantitative analysis.

Just like in user interviews, you need to be careful about how you’re writing your questions. You’ll want to keep the survey as short as you can while still getting the information that you want; if the survey is too long, you may find that you don’t get as many responses as you’d like. Surveys can be relatively inexpensive to run, and there are a lot of survey tools out there to choose from.



Diary Study

A diary study can be used to see what users do and how they interact with your product over a longer time frame. Diary studies can often be used as a follow up to a contextual inquiry or an interview, to get some additional information from some of the more engaged and informative users that you encountered.

In some cases you may ask users to take photos, keep a scrapbook, or complete other similar activities. Make sure you give the users clear instructions so they know what they should be doing, and follow up with them at regular intervals to keep them engaged.



What do people want?

A/B testing

In A/B testing, you create two variations of an element of your product, such as a registration form. The variations could be represented by anything from a simple paper prototype to a live website. You then define a metric that will measure the success of each variation that you’ve created. For example, you could measure the bounce rate or the NPS of a landing page.

Two variations of an apple (Image from Pexels)

Next, you run an experiment with users where one group sees “Version A” and another group sees “Version B”. Using the metric you defined, you’ll measure which version was more successful. That’s the one the users want. You can do this process with more than just two variations, in which case it’s called multivariate testing.

You can use A/B testing to examine a wide variety of things, such as:

  • CTA wording, size, colour, placement
  • What images you’re using
  • The amount of text on a page
  • Layout and style
  • Typography
  • Product descriptions

A/B testing is an excellent and powerful user research tool. There’s a lot of nuance to doing an A/B test correctly, so make sure you’re designing your study appropriately and that you’ve brushed up on concepts like statistical significance so you know when to stop the study.
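As a rough illustration of what that significance check might involve, here is a minimal sketch of a two-proportion z-test, one common way to compare conversion rates between two variations. The visitor and conversion counts are made-up numbers, and in practice you would also plan your sample size before starting the test.

```python
# Minimal sketch: two-proportion z-test for an A/B test on conversion rate.
# All counts below are invented example numbers.
from math import sqrt
from scipy.stats import norm

def ab_test(conv_a, visitors_a, conv_b, visitors_b):
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally well.
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided test
    return p_a, p_b, z, p_value

p_a, p_b, z, p_value = ab_test(conv_a=120, visitors_a=2400,
                               conv_b=152, visitors_b=2390)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
# A p-value below your chosen threshold (commonly 0.05) suggests the difference
# is unlikely to be due to chance alone.
```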



Rapid prototyping

The idea behind rapid prototyping is to get your designs in front of users early and often. Rapid prototyping is done iteratively, in a three-step process:

  1. Prototype — Create interactive mockups of your interface
  2. Review — Test the prototype with users
  3. Refine — Make adjustments based on feedback

(Image from Smashing)

The more you can do this early on, the less you’ll (ideally) need to make changes during development, when it can be far more expensive.

The fidelity of your prototypes can vary. You might start out with paper, and end up with pixel-perfect high-fidelity mockups. In some cases you might even use some live code.

If you’re going the paper route, you can put together a sketching kit and start drawing. You can use products like UI Stencils to give you a head start if you feel like you can’t draw particularly well. You can also use an app like Marvel to make your sketches interactive.

Paper prototypes allow you to test your designs quickly, cheaply, and easily. If you have a real budget crunch, this can be a good way to go. However, it’s worth keeping in mind that it can be distracting for users if your prototype is too low fidelity. Jake Knapp actively discourages paper prototyping with users, saying that:

“If the product doesn’t look real, the customer response won’t be real”

Depending on your skill level with your design tool(s) of choice, you might be able to jump directly to high-fidelity prototypes. Whichever direction you take, the important thing is to get prototypes in front of users, learn from how they respond, and iterate.

Of course, this isn’t always going to be possible, as some systems are just very difficult to prototype. At ThinkUX, we worked on a VR project where we simply weren’t able to meaningfully prototype and test some parts of the system before implementing them in code.



Focus groups

I could easily write an entire article about the pitfalls of focus groups, and there's no shortage of such articles out there. One of the primary issues with focus groups is observer dependency: the researcher reading their own feelings into the results of the group discussion. There's also groupthink, and the fact that a few particularly vocal members can make it seem like there's a consensus even when quieter members of the group disagree. Further, it's worth emphasizing once again that there can be a significant difference between what people say and what they do.

Consensus achieved? (Image from Pexels)

However, when done correctly, focus groups can uncover a lot of very useful information. One of the most interesting results can be discovering the language your customers use, which can help you to understand and describe the experiences your users share. Having a group of users together can also help them jog memories and ideas in each other that they may not otherwise have recalled.



Can people use it?

Usability test

According to the Nielsen Norman Group, if you do only one type of user research on your project, it should be qualitative usability testing. In usability testing, you recruit some users, come up with a list of tasks for the users to accomplish, and set them loose on your system (or prototype). You can do this formally (create a screener, schedule participants, have them come into your lab, record the session, etc.), or with guerilla usability tactics.

You should test with approximately 5 users, as beyond that you’ll start to see diminishing returns. At the conclusion of a usability test, you’ll often find that parts of your design worked, but that users uncovered problems that you’d never have thought of, and a lot of things need to be tweaked or reworked. This is why it’s so important to test your designs with real people.
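The "about 5 users" guideline comes from an often-cited problem-discovery model (associated with Nielsen and Landauer): the share of problems found is roughly 1 − (1 − L)^n, where L is the probability that a single participant uncovers a given problem. The sketch below plugs in the commonly quoted L ≈ 0.31 to show the diminishing returns.

```python
# Minimal sketch of the often-cited problem-discovery curve:
# share of problems found ≈ 1 - (1 - L)**n, with L commonly quoted as ~0.31.
def problems_found(n_users, discovery_rate=0.31):
    return 1 - (1 - discovery_rate) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> ~{problems_found(n):.0%} of problems found")
# With L = 0.31, five participants already surface roughly 85% of problems,
# which is why adding more users to a single round yields diminishing returns.
```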



Card sorting

Card sorting is a technique that can help a lot with your information architecture. Write down the major features or topics for your system on cards, then recruit some users and ask them to organize the cards into categories that make sense to them. You can do open sorting, where participants put the cards into groups and then name the groups themselves, or closed sorting, where you give them defined categories to sort the cards into. The results of a card sorting study can help you to decide the structure of your website, how to label your menus, how to group your content, and so on.

Card sorting can be done in person with index cards, or using an online tool such as Optimal Sort. If you’re doing the study in person, the most difficult part will likely be analyzing the results, particularly if you’ve recruited a large number of participants.
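If you do end up analyzing an in-person sort by hand, a co-occurrence count is a simple place to start. The sketch below assumes a hypothetical data format (one dictionary of group name to cards per participant) and counts how often each pair of cards was grouped together.

```python
# Minimal sketch: co-occurrence counts from open card-sort data.
# The sorts below are invented examples; the data format is an assumption.
from itertools import combinations
from collections import Counter

sorts = [
    {"Account": ["Login", "Profile"], "Shop": ["Cart", "Checkout", "Returns"]},
    {"Me": ["Login", "Profile", "Returns"], "Buying": ["Cart", "Checkout"]},
]

pair_counts = Counter()
for participant in sorts:
    for cards in participant.values():
        for a, b in combinations(sorted(cards), 2):
            pair_counts[(a, b)] += 1

# Pairs grouped together by most participants are candidates for living in
# the same section of your information architecture.
for (a, b), count in pair_counts.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```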

Card sorting equipment (Image from Pexels)

You’ll typically recruit more users for a card sorting study than with a usability test, but don’t go overboard here as you’ll tend to get diminishing returns beyond 15 or so users.


Tree testing

Tree testing is another research technique that will help you to assess the information architecture of your product. It can (and likely should) be used along with a card sort. Tree testing helps you to answer questions like:

  • Can people easily find information on your website?
  • Do your menu/category names make sense to your users?
  • Is information categorized in a way that users expect?

In a tree test, users navigate the site to complete tasks (e.g. “buy a sweatshirt”) using only links — the user interface is stripped away entirely. If, through previous research, you’ve found that users aren’t reaching an important page on your website, a tree test can help you to determine if the issue is caused by a problem with your information architecture or by something to do with your UI.

At the end of the study, you’ll end up with metrics such as task success rate, failure rate, time to complete the task, and what routes users took through the site tree before selecting an answer (correctly or incorrectly).
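As a sketch of how those metrics might be tallied from raw results (the record format and field names here are illustrative assumptions, and online tree-testing tools will normally compute this for you):

```python
# Minimal sketch: summarising tree-test results for a single task.
# The attempt records and the correct destination below are invented examples.
from statistics import median

CORRECT_NODE = "Men > Tops > Sweatshirts"

attempts = [
    {"path": ["Men", "Tops", "Sweatshirts"], "answer": "Men > Tops > Sweatshirts", "seconds": 14},
    {"path": ["Sale", "Men", "Tops"],        "answer": "Sale > Men > Tops",        "seconds": 32},
    {"path": ["Men", "Tops", "Sweatshirts"], "answer": "Men > Tops > Sweatshirts", "seconds": 21},
]

successes = sum(a["answer"] == CORRECT_NODE for a in attempts)
print(f"Success rate: {successes / len(attempts):.0%}")
print(f"Failure rate: {1 - successes / len(attempts):.0%}")
print(f"Median time:  {median(a['seconds'] for a in attempts)} s")
for a in attempts:
    print("Route: " + " > ".join(a["path"]))
```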



Wrap up

Before you can decide on what research techniques you’re going to employ, you need to figure out what you’re trying to learn, and why you want to learn it.

Again, this is not an exhaustive list of UX research techniques. There are a lot of different ways to attack a given problem, but this article should help give you some idea of what techniques you can use in your research, and what kinds of questions they can help you to answer.

 

Source: https://uxplanet.org/how-to-choose-a-user-...