User Experience Design

A few well-known facts about user testing:

1. Show me, don’t tell me
To design the best UX, pay attention to what users do, not what they say. Self-reported claims are unreliable, as are user speculations about future behavior. Users do not know what they want.
source: https://www.nngroup.com/articles/first-rule-of-usability-dont-listen-to-users/

2. Testing 5 users is enough
Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford.
source: https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
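The 5-user rule comes from Nielsen and Landauer's problem-discovery model: the share of usability problems found by n users is 1 − (1 − L)^n, where L is the probability that a single user reveals a given problem (about 31% across their projects). A minimal sketch of that formula, using their published average:

```python
def problems_found(n_users: int, l: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n test users,
    per the Nielsen-Landauer model: 1 - (1 - L)^n."""
    return 1 - (1 - l) ** n_users

# Diminishing returns: 5 users already uncover roughly 85% of problems,
# which is why many small tests beat one large one.
for n in (1, 3, 5, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```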

3. Context of use is crucial
Emphasize and leverage each channel’s unique strengths to create usable and helpful context-specific experiences.
source: https://www.nngroup.com/articles/context-specific-cross-channel/

4. Use the right test & research technique

Taking remote usability testing of a fully working prototype as an example, the conclusion is:

Consider using both broad tasks and specific tasks in your tests.

The key to a successful study is to ask users to perform tasks followed up by questions that will give you the type of insights you need. Once you’ve clearly defined your test objectives, you’ll be able to decide whether to use tasks and questions that are either open-ended or specific.

Open-ended tasks and questions help you learn how your users think. They can be useful when considering branding, content, and layouts, or any of the “intangibles” of the user experience. They’re also good for observing natural user behavior.

Specific tasks and questions can help you pinpoint where users get confused or frustrated trying to do something specific on your site or app. They’re great for getting users to focus on a particular feature, tool, or portion of the product they might not otherwise interact with.

When to Use Open-Ended Tasks and Questions

Find Areas of Interest – When you’re not sure where to focus your test, try this: run a test using open-ended tasks and questions. You’ll be sure to find areas of interest to study in a more targeted follow-up test.
Exploratory Research – If you’re doing exploratory research, open-ended tasks and questions can help you figure out how people are using your product and the kinds of problems they’re running into.
Identify Usability Issues – If you want to find things that are broken or cause friction for your users, letting them explore freely will uncover issues you may not already be aware of.
Pitfalls to Watch Out For

Clearly Define Your Test Objective – When you’re asking open-ended tasks and questions, you still need to make sure you have a clear objective in mind. (For example, “Can visitors find the product they’re looking for?”) If you don’t know what you want to learn, your test participants may wander around aimlessly without uncovering anything useful. Make sure that your tasks and questions support the ultimate goal of your research.
Keep Users Talking – Make sure you keep the users talking while they’re performing open-ended tasks. You don’t want them to forget to speak their thoughts aloud as they explore freely, so remind them to explain why they’re doing what they’re doing.

When to Use Specific Tasks and Questions

Test Specific Features – Give test participants specific instructions if you want to test the usability of a certain feature of your product. For example, “Please use the search bar to find a pair of men’s black dress shoes in size 11.”
Complex Products – If you have a complicated, non-traditional, or unusual product that people won’t automatically know how to use, specific tasks will guide them through it and explain the context.
Conversion Optimization – If you know there’s a specific point in your conversion funnel where people are bouncing, use specific tasks and questions to watch them go through the funnel. This will give you the context and insights to understand why they’re bouncing.
Pitfalls to Watch Out For

Avoid Giving Exact Instructions – Even if you’re giving your test participants specific tasks and questions, you need to keep a balance. You don’t want to tell them every single thing to do, because then you won’t learn anything. Let them do some of the work on their own, and try not to hand-hold them too much.
Avoid Leading Questions – Try not to bias your questions by suggesting a specific response. For example, “How easy was it to find the pricing page?” This is such a common trap that even the most experienced researchers fall into it. Make sure all your questions are as objective as possible. A good way to reframe that question would be, “How easy or difficult was it to find the pricing page?”

source:

https://www.usertesting.com/blog/2015/05/18/open-ended-vs-specific-tasks-and-questions/

https://www.usertesting.com/blog/2011/10/24/tips-for-top-test-results/

5. Halo effect, leading questions, and bias
Ask Me No Leading Questions, and I’ll Tell You No Lies
You may already anticipate some issues users will encounter, but avoid leading questions such as: “Was it hard to find the Preferences page?” or “How much better is the new version than the original home page?” A more neutral approach produces fairer results: “Compare the new version of the home page to the original. Which do you prefer?”
Participants often have a bias toward positive feedback, because they don’t want to hurt the moderator’s feelings (halo effect). Remote unmoderated user testing has less bias, but you should still be aware of that potential.
source: http://researchaccess.com/2013/07/leading-questions/

https://www.usertesting.com/blog/2011/10/24/tips-for-top-test-results/

6. Consistency matters
A consistent user experience (not necessarily only the UI), regardless of platform (web, email, mobile devices, kiosks, online chat, and physical locations), is one of the four key elements of a usable omnichannel experience. Consistency across channels helps build trust with customers. This doesn’t mean, for example, replicating an iOS-style app UI on the Android platform.

source: https://www.nngroup.com/articles/cross-channel-consistency/

7. Use the right metrics – interpretation of user testing results can differ between people with different mindsets and objectives
My favourites, depending on context, are: task completion rate, time to complete, taps to complete, critical incident analysis, and the emotional signature that emerges from open-ended tests, whenever possible.
source:

https://www.nngroup.com/articles/usability-metrics/

http://www.measuringu.com/blog/essential-metrics.php
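As an illustration of these metrics, here is a sketch computing completion rate, time on task, and taps over hypothetical session data (all numbers are invented for the example; MeasuringU recommends the geometric mean for task times, since they are positively skewed):

```python
import math

# Hypothetical per-participant results for one task (invented data).
sessions = [
    {"success": True,  "time_s": 42.0, "taps": 7},
    {"success": True,  "time_s": 55.5, "taps": 9},
    {"success": False, "time_s": 90.0, "taps": 14},
    {"success": True,  "time_s": 38.2, "taps": 6},
    {"success": True,  "time_s": 61.0, "taps": 8},
]

# Task completion rate: share of participants who finished the task.
completion_rate = sum(s["success"] for s in sessions) / len(sessions)

# Time on task, summarized with the geometric mean over successful
# attempts only (failed attempts have no meaningful completion time).
times = [s["time_s"] for s in sessions if s["success"]]
geo_mean_time = math.exp(sum(math.log(t) for t in times) / len(times))

# Taps to complete, averaged over successful attempts.
taps = [s["taps"] for s in sessions if s["success"]]
mean_taps = sum(taps) / len(taps)

print(f"completion rate: {completion_rate:.0%}")
print(f"geometric mean time (successes): {geo_mean_time:.1f} s")
print(f"mean taps (successes): {mean_taps:.1f}")
```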
