Usability (or user) testing is a key part of the iterative design process. Testing digital products with real users gives you the best chance of meeting their needs and achieving return on investment (ROI) for the company. But you can only gain useful insights if the test is carefully planned and run. We test throughout our user-centred design process, and I’d like to share tips to help you make the most of your usability testing sessions.

What is usability testing?

Over the years, we have run usability studies on countless websites and mobile apps. Recently we’ve even branched out to TV apps and physical letters! We test throughout production, from paper prototypes and wireframes to user acceptance testing (UAT) pre-launch.

It doesn’t need to be time consuming, or even particularly expensive. Typically, 80% of usability issues are uncovered by the first five participants, so running regular, small tests is a far better way to determine whether the product meets user needs. Recent research highlights that every $1 spent solving a problem in the design phase would cost $10 if the problem is carried into the development stage, and $100 or more if it is not resolved until after launch.

However, testing sessions must be carefully planned to gain true insights. Here are some tips to help you run high-quality usability testing sessions.

Define the Purpose of Your Test

Defining a clear research question is the first step in planning your testing sessions.

A test can be either explorative or goal-oriented. If the main goal is to assess the overall product, the test is explorative. Here, you need a broad research question, e.g. ‘how is the product functioning?’, without forming hypotheses about specific functionalities. For instance, we aimed to improve the balance notification letters sent by dlc, the UK’s leading debt recovery company. The project began by exploring the original letters, to uncover the positives and negatives.

In contrast, goal-oriented approaches assess a specific feature or user interface (UI) element; this should be the focus of the research question, and the test should begin with a hypothesis. For instance, we built a portal for a company managing automated parking systems. Usability testing was focused on specific interactions; research questions explored the length of the form, and whether key interactions (like alerts) were clear.

Set the Tasks for Participants

Translate the research questions into tasks for the user, or the actions you will ask them to perform. Here, pick a typical user journey or their ‘natural’ actions; these goals will help you to answer your research question.

For instance, our usability study for Ted Baker aimed to establish whether the checkout process was over-complicated. So, we asked users to purchase a certain item while we observed the checkout process. Take care to establish the criteria that define success or failure, and the metrics to measure. Here, ‘success’ meant checkout completion within a defined time period; ‘failure’ was abandonment, or missing the time limit.

However, there are other important metrics. By measuring the length of time taken to move to the next stage, we can see if people are ‘stuck’; eye tracking systems can even reveal the areas they focus on. Record any verbal comments made during tasks for further insight. Always define your metrics in advance – or you’ll risk being overwhelmed by data, with no clear starting point.
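To make the idea of predefined metrics concrete, here is a minimal sketch of how session results might be summarised afterwards. The participant records, field names and the 180-second threshold are all hypothetical, invented for illustration; they are not from the Ted Baker study described above.

```python
from statistics import median

# Hypothetical session records from a checkout task: whether the participant
# completed checkout, and how long they took in seconds.
sessions = [
    {"participant": "P1", "completed": True,  "seconds": 95},
    {"participant": "P2", "completed": True,  "seconds": 210},  # missed the time limit
    {"participant": "P3", "completed": False, "seconds": 140},  # abandoned checkout
    {"participant": "P4", "completed": True,  "seconds": 120},
    {"participant": "P5", "completed": True,  "seconds": 88},
]

TIME_LIMIT = 180  # illustrative: 'success' = completion within the defined period


def summarise(results, time_limit):
    """Return the success rate and the median time-on-task for successful runs."""
    successes = [
        r for r in results if r["completed"] and r["seconds"] <= time_limit
    ]
    rate = len(successes) / len(results)
    med = median(r["seconds"] for r in successes)
    return rate, med


rate, med = summarise(sessions, TIME_LIMIT)
print(f"Success rate: {rate:.0%}, median time-on-task: {med}s")
```

Because the success criterion and metrics were fixed in advance, the analysis reduces to a few lines; without that, you would be mining raw recordings for a story after the fact.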

Heatmap example of a usability test

Plan the tasks and the metrics to measure before analysing heatmaps of your users’ behaviour.

Run Pilot Tests

Run pilot tests with your usability testing tools before inviting ‘real’ users to undertake the tasks: colleagues can be great participants, giving valuable feedback on whether the tasks you’ve set are clear.

Pilot tests don’t take long but guarantee higher quality results. They have a triple advantage:

    1. Ensure the instructions and tasks set are clear, highlighting if something important has been missed.
    2. Increase confidence when beginning the ‘real’ test with users, by incorporating feedback from a pilot in a more relaxed situation.
    3. Test any equipment in advance, instead of encountering hardware or software issues on the day of the test.

Pick the Right Participants & Environment

Products are designed to maximise the user experience (UX) for their specific audience. So, you must test with the right participants, in particular environments – but there are a few options.

Guerrilla testing is fairly informal, and normally run by stopping participants and asking them to complete tasks ‘in the wild’: a library, shopping centre, museum, etc. A more natural environment helps participants act spontaneously; it can even be the best way to recruit the right audience. For example, to evaluate a new responsive website for MaxiNutrition, we headed to the local gym, which was full of the target audience.

However, this is not always the best approach. It can be tricky to find participants who match the product’s intended audience. In this case, we can ask clients to provide participants, especially if it’s an internal tool. Another option is working with professional recruitment partners, especially for precise, targeted requirements. It may not always be possible to move the product around (e.g. if testing a TV app), or the equipment needed (such as eye tracking machines). These kinds of projects suit a more formal testing environment, such as a usability lab or the office.

Make Participants Feel Comfortable

Whichever environment is chosen, encouraging your participants to behave as naturally as possible is the best way to gain reliable results. This is crucial when running your test in an ‘unnatural’ environment, like a lab; these places can feel intimidating, so users may not behave as they normally would.

Here, I’d advise you to:

    1. Avoid wearing overly formal clothes, and be approachable – smile!
    2. Spend a couple of minutes at the beginning to greet the participant. Introduce yourself, explain the nature of the test and assure them that data is confidential.
    3. Start with an easy task, and explain that you’re assessing how well the product works for them – not their performance. This differentiation is key.

Sofia and Cristina conducting a usability test

If participants feel uncomfortable, they are probably not going to behave as they would in a ‘normal’ setting.

Keep It Neutral

Usability tests can be difficult to run – they are essentially putting the spotlight on how well your design works! It’s natural to feel protective and unconsciously lead participants to validate your decisions. But, of course, this will prevent you from gaining accurate, actionable feedback on the design.

Tips to avoid this include:

    1. Present tasks and ask questions in a neutral way, rather than implying an answer. For example, if you’re testing whether it is difficult to complete a certain task, don’t ask, “what problems did you find?” but instead say “did you experience any issues?”.
    2. Don’t ask users how they would approach a task directly – ask them to act. For instance, if you want to know if people would use the buttons to share an article, ask them to share the article. Here, you’ll discover if they would use the buttons, or share in another way, such as pasting the link into social media channels.

Testers often start sessions with certain expectations and hypotheses – but you must be open-minded and objective when you analyse the results. A good designer is one who is flexible enough to accept the weaknesses of their own design and aims to constantly improve.


The only way to achieve useful insights that will make a real difference to your design is through careful usability test plans, following the ideas I’ve outlined here. Otherwise, you risk recording distorted results, with no actionable improvements – or even worse, making amendments that create a detrimental user experience.

If you would like to run usability testing sessions, Cyber-Duck can help you with one-off consulting, or conduct high-quality testing on your behalf. Please get in touch to find out more.