Part 2: Be careful what you ask for: Designing Tasks for Qualitative Studies



In Part 1, we discussed the fundamentals of good task writing that apply to any kind of usability study. Now let's focus on the right way to write tasks for qualitative studies.

When you undertake a qualitative study, you are looking to unearth problems in the UI and get inside the heads of representative users (people who match the kinds of users who will ultimately be using the app or site you are working on).

If you've picked the right people, the way they feel about the new site will closely mirror the way their peers will feel. If you can remove all the friction points and make it intuitive for them, you can be confident your actual users will find it easy to use.


The primary goal of any qual study is to gain insights, not numbers. Your findings report will not contain bar charts, numbers, or graphs. Instead, it will contain captions, quotes, and descriptive paragraphs.

The things testers say in their side comments, personal mutterings, and even the frequency and intensity of their swear words will be relevant to you. In a recent study, we actually noted the number of deep sighs from one of our testers, as we quickly realized this was how he signaled his frustration with specific elements of the different sites we were taking him to. (The sighs got deeper and more prolonged as he grew more and more frustrated by a particularly awful site. That site was like the Rosetta Stone for this session: it gave us his sigh-language for Full Fail.)

Because of that, your tasks will be deliberately crafted to allow a great deal of freedom of interpretation by your testers.  In qual studies, you aren’t timing their ability to do specific things in specific ways.  You are trying to understand how they think when they try to work with your new layouts, and find out what their challenges are, so you want to give them room to maneuver for themselves during the session.

Writing Good Qualitative Tasks 

Here are the hallmarks of a good qualitative task.

Use open-ended tasks

Don’t be afraid to leave a task open to interpretation. You’ll discover what they care most about, what doesn’t matter a bit to them, and even get insights into how they make decisions.   A task like this would be fatal in a quantitative study, but when you want insights, it’s golden.

Example: "Find a health plan that meets your needs."

Writing it this way lets you see how they begin their search and understand what, exactly, makes a health plan meet their needs. Do they focus first on the cost? Do they think first about the size of their family, their age, or the choices they'll have for their healthcare provider? All of this matters, because it will influence the content strategy of the page.

Provide just enough detail to set the task scenario

We've learned that giving too much detail in your scenario actually inhibits testers' freedom to explore and maneuver naturally. They spend too much time referring back to the scenario instead of diving into the site and losing themselves in the discovery that will be so valuable to you.

If you have the right testers, they will not need much of a scenario because you can ask them to do things they would normally do.

Plus, we've found that scenarios are sometimes a poor match for a tester's own industry or experience, and just feel "off" to them through the whole session. Instead of framing a very natural exploration, they create an awkward sense that the tester is just "play-acting" through the test.


Example: A few months ago, we were testing a new site navigation and needed to know whether the security and access portfolio was labeled and grouped the way buyers and users would expect. We found a set of testers who were IT managers, team leads, or system admins currently working with security products.

No elaborate scenario was needed.  All we had to say was:

“Go find products that will help you do your job.” 

Don’t be afraid to change tasks mid-stream

You can be very flexible with the tasks in a qual study. You may find that a different wording works better, or your third participant may do something surprising that causes you to add a new task to the test. That is fine. Since we are after insights, not numbers, the tasks are not being measured or timed, so you don't have to conduct each task exactly the same way to keep all the variables constant.

Remember, qual research is looking for interesting observations, and if you need to shift some tasks for different testers to allow them to explore more freely, go for it.

We will often do the same 3-4 core tasks, and then add a custom task or two for each tester when it seems appropriate.  Some of our most surprising insights have come from these extra tasks at the end.  The testers seem to be more relaxed by then, and are probably more comfortable talking out loud while they work.  Whatever the reason, this works really well for us.

Invite them to email post-test insights

Once someone has engaged with your new layout for an hour, they may continue to have thoughts and insights about it after the session has wrapped up. Your post-test interview will capture their immediate feedback, feelings, and thoughts, but some testers go on thinking about it for a few more days.

You can take advantage of that “afterglow” effect by encouraging them to share any additional thoughts.  We always send an Amazon gift card to each participant after they test for us, and that’s a good chance to express your gratitude for their valuable insights, and invite them to send you any other thoughts they might have about the things they experienced during the session.

Don't bother with subjective ratings at the end of tasks

Formal UX research training often teaches us to have testers rate their experience during the test. For example, we did a competitive review for a client last year, taking testers to the sites of five of their top competitors and having them do a few tasks on each.

We had each tester rate their satisfaction with each site after finishing its tasks. What we ended up with was highly subjective, weak data that told us nothing usable at all. It also added a lot of time to the tests, which hurt us: we had to end some sessions while the testers were still willing to share more of their insights.

Nielsen Norman Group now advises that these ratings are not necessary, and we are glad to have our experience validated.

If someone loves or hates a site, you will hear it in their comments and see it in their behaviors.  No need to make them rate their satisfaction.


Next installment

In Part 3, I will explain how to write good tasks for quantitative studies.

February 16th, 2018 | Categories: Usability Testing, User Experience

About the Author:

Susan is one of the original founders of WebWise Solutions, and is now the sole proprietor of the company. She and a partner created the company in 2001 to build and manage the end-user communities of several enterprise software companies, including Novell, Symantec, SUSE, Micro Focus, Dell, Omniture, Adobe, and many others. Susan went on to develop additional lines of business doing UX research and CRO consulting, web writing and editing, and content marketing strategy for many of their clients. She built, developed and managed a global team that delivered UX/CRO services to a number of sales and marketing teams at Micro Focus and SUSE.
