Or, 5 ways to avoid self-sabotaging your next user research project
There are several unwritten rules in life. Don’t run with scissors. Don’t drink unidentified liquids. Don’t put your life savings in investments that seem too good to be true.
The same goes for user research. Don’t interrogate your user as if he’s a criminal suspect. Don’t shoot questions with the ferocity of a machine gun. Don’t confuse your user unnecessarily.
Not only are such transgressions painful to witness (or be party to); they can also skew research results and consequently send your product off the rails.
Here are some lessons my team and I have learnt from our various projects to date.
#1: Keep it simple, stupid
(Or, Thou Shalt Not Confuse Thy User)
Test only what needs to be tested. Do not be superfluous. Run mock interviews beforehand to identify and rectify areas that cause the user confusion.
Remember: A confused tester will not give you accurate or reliable results. His subconscious mind is focused on one thing: getting out of the situation as quickly as possible. Think about the time you were cornered by a well-meaning but pesky aunt during an extended family gathering. Yes — that time. You’d have done anything, said anything, to cut that encounter short. So will your tester.
Tip: Agree with your team on what you want to test, and keep that list short. Otherwise, you risk obtaining results that reflect your interviewee’s annoyance and confusion rather than real-world behaviour.
#2: Don’t test 10,000 prototypes when 2 will do
(Or, Thou Shalt Not Test for the Sake of Testing)
A subset of #1.
You do not have to test Version A (with the green buttons) against Version B (exactly the same, except with yellow buttons) vs Version C (same as Version A, except with shadows) vs Version D (same as Version B, except with shadows) vs Version E (exactly the same as Version D except that the buttons are one line instead of half a line above the Call To Action Buttons)….
Feel confused just by reading all that? Irritated? Didn’t bother reading through every word but just jumped to this paragraph? Good, because that’s how your interviewee feels, and chances are, he will pick one option at random just to bring the interview to an end.
Tip: Keep the number of items to be covered in the user interview or usability test to a minimum. Always aim to delve deeply, not skim superficially.
#3: Give your user breathing space
(Or, Thou Shalt Not Interrogate Thy User)
I often made this mistake myself when I first started running interviews. I didn’t realise it until I reviewed the interview recordings and saw how fast I was throwing questions at the interviewee. By asking questions incessantly, you risk making the user feel as though he is being interrogated. This is usually not a good state of mind to be in.
While you may get a false sense of accomplishment from doing this (“I’m asking so many questions! I’m getting so many answers!”), you lose the chance to obtain deeper insights that can surface when the interviewee has some time to think.
Tip: Take a breath, exhale, and take another breath before you go on to the next question, particularly if you’ve just asked a “Why” question. A little breathing space may be all you need to transform a dull and predictable interview into a “Eureka!” moment.
#4: Understand the methodology of your research
(Or, Thou Shalt Appreciate Qualitative Research)
There are two main types of research: Quantitative and Qualitative.
Quantitative (“Quantity”) research requires a large sample size, usually in the hundreds or thousands, in order to arrive at meaningful statistical findings. Its goal is to produce hard numbers — how much, how many, what percentage.
Qualitative (“Quality”) research uses a much smaller sample size and focuses on unearthing the “whys” behind the “whats”. If you are testing a prototype, Jakob Nielsen’s oft-cited finding is that testing with 5 users will reveal about 85% of usability problems. Personally, I’m happier with a sample size of 10–15 — but if that many participants aren’t available, I’m willing to settle for 5–7.
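The 85% figure comes from a simple probability model (Nielsen and Landauer). A minimal sketch, assuming each tester independently uncovers any given usability problem with probability p ≈ 0.31 — the average Nielsen reported, though real projects vary:

```python
def share_of_problems_found(n_users: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n_users testers,
    assuming each tester independently finds a given problem with probability p."""
    return 1 - (1 - p) ** n_users

# Diminishing returns: most problems surface within the first handful of testers.
for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {share_of_problems_found(n):.1%}")
```

With p = 0.31, five users land at roughly 84%, while fifteen users push past 99% — which is why going beyond a dozen or so testers in a single round rarely pays off.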
Research interviews are usually qualitative studies: the focus is on asking users why they behave the way they do, teasing out their answers through a mixture of psychological insight, observation and carefully worded questions. We will be able to tell you that 13 out of 15 interviewees gave similar feedback; we won’t be able to say that 86.7% of users feel the same way.
But guess what? Giving the client a percentage figure is not the goal of qualitative research — telling them the “whys” behind users’ behaviour is.
Tip: Educate stakeholders on the objectives of qualitative research and how such research is carried out. Help them understand that it’s about the quality of the answers, not the quantity of respondents.
#5: Stick to the agreed goals and objectives
(Or, Thou Shalt Not Change Thy Goalposts)
In one project, we were tasked with testing a redesigned homepage and article template for how clearly they communicated the site’s new model. After the goals and methodology were agreed on, the client decided to test additional homepage templates that were entirely identical — except for the colour of the logo. The single article template later multiplied into three variations. The final addition came just one day before the user interviews began.
These changes to the agreed scope of work forced us to rework our discussion guide and interview structure. Rather than testing what mattered (messaging and clarity), the research study became an exercise in choosing one’s preferred colour and design. This deprived us of the time we needed to delve into the real purpose of the study: whether or not the revamped templates effectively communicated the website’s new subscription model.
While we managed to get valuable results in the end, it was a needlessly painful and frustrating process that hindered our ability to obtain even better insights.
Tip: Set clear boundaries around the scope and execution of the user interviews and usability tests. Inform stakeholders that if they insist on breaking these boundaries, the findings of the research may be compromised.
Anyone can do user research, but to do it well is another matter altogether.