Part 1: Be careful what you ask for: designing the right tasks for usability tests


Failure for a UX researcher goes like this: you test the usability of a client’s app or website, make discoveries that lead to recommendations, which cause the client to change things (or NOT change things), and then… when they launch it into the real world, it ends up sucking in some unexpectedly awful ways for their real users.

That is the nightmare we all dread.  That is the scenario that shatters your confidence in your skills, your methods, and your team.


With an applied science like UX research, it all comes down to trust. You have to trust your findings enough to lean on them to formulate hypotheses, which you then must test (and trust) all the way to the end.

And then… the client has to trust you enough to act upon your advice.  Yeah, that.

So let’s take a good look at one of the most fundamental elements of any usability test.  The tasks.   The better we are at writing the tasks for the tests we conduct with in-person and remote testers, the more we will be able to trust what we discover.

Why does it matter?

Every usability test asks participants to do tasks. But the kinds of tasks you give them, and the way you phrase them, will have a profound effect on the reliability of your results. Poorly worded tasks can inadvertently guide your testers through a UI that will not work for normal users. Unrealistic tasks can produce meaningless results that needlessly worry you, or falsely reassure you.

Here are some tips to help you write the kinds of tasks that will get you results you can trust.

Task-writing Fundamentals

Here are some foundational guidelines for writing good tasks for any kind of usability study. Follow them whether you are running a quantitative or a qualitative test.

 Look at what real users actually need to do.  

Make the tasks realistic, and you’ll get excellent and relevant data. For one thing, it takes less explaining on your part when you ask testers to do something they already do all the time (on the current site, or on competitors’ sites).


Throughout the session they’ll be more comfortable, and they’ll interact with the app or website with natural confidence.  If you ask them to do something they wouldn’t do in real life, they’ll behave oddly/unnaturally, and won’t exercise the UI in the ways that will reveal its problems.

If you need help with this, your clients should be able to tell you what the top tasks are for the app or pages. Or you can survey the users, or use web analytics data to discover the most popular paths through the site.
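If you go the analytics route, a small script can surface the most common paths for you. Here is a minimal sketch, assuming a hypothetical pageviews.csv export with session_id and page columns ordered by timestamp (your analytics tool’s export format and field names will differ):

```python
# A minimal sketch of mining "top tasks" from a web-analytics export.
# Assumes a hypothetical pageviews.csv with session_id and page columns,
# one row per pageview, ordered by timestamp -- adapt to your tool.
import csv
from collections import Counter, defaultdict

sessions = defaultdict(list)
with open("pageviews.csv", newline="") as f:
    for row in csv.DictReader(f):
        sessions[row["session_id"]].append(row["page"])

# Count the most common three-page paths across all sessions.
path_counts = Counter(
    " > ".join(pages[i : i + 3])
    for pages in sessions.values()
    for i in range(len(pages) - 2)
)

for path, count in path_counts.most_common(10):
    print(f"{count:6d}  {path}")
```

The paths that show up most often are good candidates for realistic test tasks.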

Don’t accidentally leave interface clues.   

Watch out for any clues you might have provided in your task descriptions.  It’s hard to avoid using them when you write,  but you need to rip them out.  Make all references as generic as possible.  Don’t say “Click the SUSE Shop button”, but rather something like “Find a way to purchase a product online.”


Don’t list all the steps they’ll have to take to do something. Make them figure it out.  That will reveal a lot about the efficacy of your content, labels and link placements.

And don’t use any of the words used on your menu when describing tasks.  If you tell someone to click on About Us to register for the upcoming events, you’ll never know where they would have looked for that without your hint.  (Even if you are testing the event registration app, make them find it from the home page.)  Clues and hints bias the testers’ behavior and make the results less reliable.
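If you want a quick sanity check before a session, here is a minimal sketch of a “hint check” that flags task wordings reusing words from your navigation labels. The menu_labels and tasks lists below are made-up illustrations, not anything from a real site:

```python
# A quick sketch of a "hint check": flag task wordings that reuse
# words from your site's menu labels. Both lists are illustrative.
menu_labels = ["About Us", "Events", "Shop", "Register"]
tasks = [
    "Click on About Us to register for the upcoming events",  # leaks hints
    "Find a way to sign up for something happening next month",  # generic
]

# Every individual word that appears in any menu label.
menu_words = {w.lower() for label in menu_labels for w in label.split()}

for task in tasks:
    task_words = {w.lower().strip(".,!?") for w in task.split()}
    leaked = menu_words & task_words
    if leaked:
        print(f"Possible hint(s) {sorted(leaked)} in: {task!r}")
    else:
        print(f"Looks clean: {task!r}")
```

A check like this won’t catch every hint, but it’s a cheap way to catch the obvious ones before your testers ever see them.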

Keep tasks emotionally neutral.  

This may seem silly, but it will matter to some testers. Don’t assume anything about their personal lives in the way the tasks are phrased. It’s safer to write tasks like “Find a way to enhance and share a photo with a friend” rather than specifying a spouse, mother, etc.

Don’t skip the pilot test

You will think you don’t have time for this, but I promise you it will pay off. Pilot-test your test with one or two representative users (NOT people from your team who owe you a favor). Use two of your real testers for this, and just plan on throwing away their results. It will be worth it. You will find phrasings that turn out to be awkward when you explain them to real people. You may discover that it’s smarter to rearrange the task order to make the test flow better. You may learn there’s a better way to ask something. You may even find redundant tasks that you can revise to be more effective.


This is especially vital when you will be doing a quantitative study, since you can’t make any changes to the test once you have started.  But it’s a wise practice across the board, and makes your tests a lot cleaner.  (It will also make you look a lot smarter during the sessions, if that is of any interest to you.)

Call them “activities” rather than “tasks”

I know we’re calling them tasks in this article, but whenever you list them or talk about them to your participants, try to call them “activities.” We’ve found that people are already pretty nervous about being tested, and a nervous tester won’t behave naturally. So anything you can do to reassure them that you are testing the site or the app and not THEM is a good thing.

The word “tasks” seems to trigger the opposite feeling — and makes them think you’re assigning them tasks that will test their abilities and intelligence.  The word “activities” is more neutral and sounds like we’re just having them play around out there for us, as if we are checking out new playground equipment.  (Which is actually the case when you think about it.)

Make all the tasks action-oriented

Whenever possible, make sure all of the tasks involve having the person actually DO something you can watch and record — make them click, drag, expand, fill in forms, etc. Don’t just ask them to explain how they would do something. People are terrible at accurately reporting how they would do something. We have very little self-awareness when it comes to the actions we take so effortlessly online.


It’s much better to ask them to  reserve a room, or book a flight, or buy a pair of shoes than to get them to a certain point in the site and then ask them to tell you how they would do it from there.

I know, a lot of times we are forced to test with clickable InVision or Axure wireframes. That’s where I learned this principle the hard way. Every time I have had to test with an artifact that isn’t fully clickable (i.e., only some of the links really work), we have had major limitations in what we could learn.

It would be like testing a new car without being able to actually drive it.  No bueno.  Would you trust a car if all the test drivers just sat inside a 3D model of it and described how they would use the features?

Next Installment

In Part 2, I’ll explain how to write good tasks for qualitative studies.


About the Author:

Susan is one of the original founders of WebWise Solutions, and is now the sole proprietor of the company. She and a partner created the company in 2001 to build and manage the end-user communities of several enterprise software companies, including Novell, Symantec, SUSE, Micro Focus, Dell, Omniture, Adobe, and many others. Susan went on to develop additional lines of business doing UX research and CRO consulting, web writing and editing, and content marketing strategy for many of their clients. She built, developed and managed a global team that delivered UX/CRO services to a number of sales and marketing teams at Micro Focus and SUSE.
