In Part 1, we discussed the fundamentals of good task writing that apply to any kind of usability study.
In Part 2, we discussed the right way to write tasks for qualitative studies.
In Part 3, we discussed how to design tasks for quantitative studies. Now let’s look at how to ask follow-up questions that don’t influence the answers and spoil your data.
After both quantitative and qualitative studies, you will interview participants with follow-up questions. Your choice of words can affect their feedback and behavior.
The goal of post-test questions is insight.
Even in a quantitative study, if you’re paying attention, you will observe things that surprise you or pique your curiosity. The ability to ask testers questions right after a test is one of the biggest benefits of live user testing. (Platforms like Usertesting.com, good as they are at running many tests without much effort, do not let you ask any follow-up questions.)
With a little bit of planning and awareness, you can avoid some of the biggest pitfalls and gather some wonderful insights with your post-test questions.
Asking the Right Questions After a Test
Quick quiz. Which phrasing of a follow-up question will produce more accurate results? Why?
A: “I saw you were having difficulty with the navigation. What happened?”
B: “Why did you have difficulty with the navigation?”
C: “What was easy or difficult about getting to the content you wanted?”
Question A: “I saw you were having difficulty with the navigation. What happened?”
Problem: The interviewer rephrases what was observed, which may not be an accurate representation of the user’s experience. To the interviewer or observer, it may have looked like the respondent was struggling with navigation, but she may have been deciding what information was most important, confused by the task, or exploring various areas of the site.
The question also names a user interface element — the navigation — which is a term that users may or may not fully understand, relate to, or normally use.
Question B: “Why did you have difficulty with the navigation?”
Problem: This question implies the answer and assumes that navigation was the problem.
It also puts the blame on the user rather than on the site. It focuses the question on the user’s actions instead of the interface elements that may have contributed to those actions.
Question C: “What was easy or difficult about getting to the content you wanted?”
Improvement: This question steers the user to the topic of interest — moving around the site and finding content — without suggesting terms or feelings to the user.
The user can say it was simple to move around or difficult, without disagreeing with the interviewer. Here the interviewer offers a general frame for the topic of the question, rather than suggesting a response.
Honest, unbiased participant feedback is critical for user research
When we ask questions, we want to learn more about the user’s actions. Why was this piece of content clear? Why did an interface element cause difficulties?
Leading questions are a problem because they:
- Embed the answer we want to hear in the question itself
- Make it difficult or awkward for the participant to express another opinion
This is particularly true in a usability-study interaction, where often the interviewer is the “authority” and many participants will not want to disagree.
Leading questions can result in biased or false answers, as respondents are prone to simply echo the words of the interviewer.
How we word these questions may:
- Affect the user’s response
- Give the user extra clues about the interface
- Yield inaccurate feedback that may not truly reflect the user’s experience, mental model, or thought process
- Alter that user’s behavior for the rest of the session
For example, an inexperienced facilitator asked “What do you think this button does?” in a session and made the user realize that the text she was pointing to was in fact an active link.
Leading questions rob us of the opportunity to hear an insight we weren’t expecting
The more leading our questions are, the less likely the user is to:
- Comment in a way that surprises or intrigues us
- Make us think about a problem or solution in a different way
They may be good for “validating” designs, but are definitely bad for testing designs.
The phrase “validate the design” discourages teams from finding and following up on UX issues in user testing. UX research must drive design change, not just pat designers on the back.
How to Improve
Some questions that we ask participants are prepared ahead of time. Review any standard questions you will be asking before or after the study, and rewrite them to make them neutral.
It is much more difficult to make up questions on the fly without leading the user. Everyone will make some mistakes in this area. It takes awareness and practice to get better at it.
Watch for instances where you ask leading questions, or where you observe others doing so, and note how they affect the user’s response.
TIP: When I review my recorded test sessions, I notice my own leading questions and see how they affected the participants’ responses. Reviewing the first recording before running the remaining sessions makes me much more self-aware for the rest of them.
Four Ways to Avoid Asking Leading Questions
- Do not rephrase what you observed in your own words
- Do not name an interface element
- Do not suggest an answer
- Do not assume you know what the user is feeling
Speaking During a Test Session
Ideally, you will never have to speak to participants during the test session. The less you say, the less chance you will accidentally influence the results.
On a remote test, I usually tell them up front, “I won’t be talking to you – I’ll put myself on mute so I can type notes. When you are done with the scenario, just say ‘OK, I’m done’ and I’ll jump back in.” This seems to set the expectation that they are really on their own, and they don’t ask for guidance during the session.
Live sessions are harder – participants can see you sitting there, and it is tempting for them to simply ask you for help.
If one of the following happens, you might need to speak up:
- The tester offers a comment
- The tester asks you a question or otherwise seeks your guidance
- The tester interrupts his or her own flow in some way
You’ll need to decide if it’s appropriate to engage with them, or if you should just wait quietly and expectantly for them to continue working. When in doubt, count to 10 silently. Often they will respond by solving their own problem.
Considerations:
- Decide whether what the user said was a real question that you actually need to answer, a rhetorical question, or just thinking aloud.
- Determine whether the noise or comment that the user made was indecipherable, or whether it was actually enough to draw a fair conclusion from.
- Consider whether you will truly benefit from probing the user further, or whether you have enough information from just observing what he is doing.