Help – my guerrilla usability testing participant isn’t in my target audience

On Thursday I had another awesome day running my guerrilla usability testing workshop, this time with the excellent digital team at Comic Relief.

As I mentioned in my previous post about my NorfolkDev workshop, whenever I speak about or teach guerrilla usability testing there are always some really insightful questions about some of the harder aspects of this methodology. Over the next few weeks I’ll be writing up responses to some of them here.

In my previous post I answered the question “I have a task scenario that requires someone to be logged in, what should I do?”

In this post I’m going to answer a question that came up again with the Comic Relief team: what happens if I discover the participant I’ve just approached isn’t in my target audience?

Contextual questions

This is a question that usually gets raised when we’re talking about contextual, or screening, questions. In a lab usability study you pre-screen participants to ensure you only have actual users or potential users of your website or app in your study.

In a guerrilla usability study, because you’re approaching people on the street, you can’t be sure that your participant is the kind of person who would really use your website, app or service.

This is why we ask a few contextual questions about our participant before we get started: to understand whether or not they’re part of our target audience and, if we have multiple types of users, which segment they most closely align to. This allows us to contextualise their feedback to a particular user persona or type, so we aren’t assuming that all of our users are having the same experience or issues.

Here are some example contextual questions:

  • “What is your residential status?”
  • “Are you employed?”
  • “What websites or apps do you use regularly?”

However, sometimes while asking these questions, we discover that this participant isn’t in our target audience at all. For example, you might ideally be looking for people who own their own property, have a full time job, and are regular Facebook users. But when you ask your contextual questions, you discover your participant is renting a small bedsit, volunteering at their local charity shop, and has only logged into Facebook once or twice.

So, what do you do? You have three options, and whichever you choose, keep your consent forms in mind. If the form says “we’re going to ask you to look at a website” but you end up running a different type of session, explain the reasons for the change and, assuming the participant is still happy to proceed, amend the form and ask the participant to put their initials against that change.

  1. Continue the session

    Unlike in a lab usability study, where we might need to spend an hour or more with each participant, the impact of us continuing the session regardless is very low. At most, we’ll lose 15 minutes out of our day.

    If a participant has been kind enough to offer you 15 minutes of their time, it can be disrespectful to dismiss them. For this reason, continuing the session is always my preferred option.

    However, if you do continue with the session, then you’ll have gathered some data as a result of it. What do you do with that data?

    Your decision here should depend on exactly what the aim of your study is.

    If you’re purely concerned with the usability of a website (for example, the legibility of text or the ease of use of interactive elements), then you might choose to include the findings. The exception to this might be if none of your other study participants who did fit your target audience had the same issues.

    In this case it could be wise to understand why that was, with further research. Perhaps your interface is so closely modelled on Facebook’s that regular Facebook users are comfortable with it, but others aren’t? That might not be an issue for your website or app now, but if Facebook changes its interface or you expand your target audience, it might be an issue later.

    However, if the aim of your study is broader than basic usability issues and you’re interested in things like whether or not you’re using the right language, or whether the products displayed on landing pages are engaging, then you might choose to exclude the findings from this session. These study aims are specific to your target audience, so including feedback from someone who isn’t likely to be a user doesn’t make sense.

  2. Change the session type

    One way to continue the session without collecting data that may not align with your study needs is to change the nature of the session. The simplest way to do this is to have a back-up set of interview questions that you can ask the participant instead.

    Whether this is a useful tactic will depend on who your users are, and how focused your current study is. If your current study is focused on full-time employed Facebook users who own their own property, but you have another group of users with a different set of criteria and needs, then there may be aspects of this participant’s interests or experience that you can gather relevant data on.

    For example, you might delve more into your participant’s use of social media, and the reasons they haven’t particularly engaged with Facebook.

  3. End the session

    Alternatively, you can choose to end the session or cut it short. There are two scenarios where you might want to do this:

    • You have a project stakeholder with you, they aren’t familiar with user research, and they might struggle to put the findings from this session to one side
    • The venue you’re in has a closing time and you can’t return to it again easily (such as a conference), and you might lose out on other participants

    If you do decide to take this option, you’ll need to explain the reasons for cutting the session short and apologise for disturbing them. If you promised your participant an incentive at the beginning of the session it’s important that you still give it to them, and thank them for any time they did spend with you.

    The risk here is that, even though the participant has their incentive in hand, they may feel like their opinions have been rejected. There isn’t an easy way to mitigate this risk, which is why I prefer not to take this option.

Next up I’ll be answering:

  • What happens if my participant isn’t used to using the device I’m testing with?
  • How do I deal with screening participants for difficult topics, like disability or income?

If your team or organisation is interested in learning more about guerrilla usability testing, then I’d love to come and run a workshop with you. For my full-day workshops, I’ll be stepping you through how to prepare a study in the morning, and taking you out to try the newly designed study sessions on real participants in the afternoon. Drop me an email if you’d like to hear more.