Only the system can fail the test
Your product or solution is ready and it’s time to put it to the test. But how exactly do you build a user test? And what’s needed to ensure it is successful? Geoff Meads outlines the process behind user testing and how to know when it’s hit its mark.
In my last UX article we looked at the difference between Alpha and Beta tests and the importance of testing our systems with the right users. In this article we’re going to explore a methodology for a legitimate test of a system in its final stages of development or final implementation.
We’ll look at the ideal environment, materials, and people to involve plus discuss what to do with the results! Let’s dive in…
Purpose
The purpose of a user test is to establish whether a system has met or failed to meet an agreed design specification. For this to be established we will need the system in question to be working and ready for test, but we will also need to understand the agreed specification. We’ll call this document a ‘Functional Specification’.
There are several ways of generating a functional specification, but the most useful way is to develop it with the end user themselves. During this process we will write down how the required system will behave from a user's point of view. When developed in this way the resulting document should be written in the user's own language, containing statements like "When in room A, pressing button B will make thing C happen". It really should be that simple.
The functional specification has two additional uses. Firstly, it acts as an agreed standard for the integrator to test their systems during development. Secondly it gives the customer a formal set of agreed, understandable measures with which to decide if the integrator has delivered what they promised. In short, it’s a document that an integrator can use to prove they’ve completed a job and get paid!
The User
When testing, it is vital that the 'user', or test subject, is not part of the design or install teams. They might be the actual intended final user (the homeowner, for example) or, more often, someone of similar intelligence and experience to the intended user.
They should not be afraid of ‘failing’ the test as they cannot fail! Only the system being tested can fail.
If possible, test the system with multiple users (separately, not as a group); if they represent a spread of ages and experience, better still. The more users who try the system, the more accurate the test results will be. However, beyond four to six users each additional test tends to add little to the lessons already learned and might be a waste of time.
The Facilitator
This person will run the test alongside the user, asking the user to carry out certain tasks from the functional specification. Just like the user, the facilitator should not be a member of the engineering team. The reason is that this group knows how the system being tested works and will find it extremely hard to resist helping or directing the user when they are unsure of what to do.
The only skill requirements for the facilitator are that they can help the user to run the test and answer any non-system related questions. The facilitator should ideally have no vested interest in the system passing or failing the test.
It bears repeating that the user cannot fail the test! Only the system can fail the test. This is not a test of the user’s intelligence, understanding or physical abilities. They just need to be themselves!
Materials
Now that we have the system being tested, a list of users to test the system, a facilitator, and the functional spec at hand, what else might we need to complete the test?
The first thing needed is a list of tasks that you wish the test subject to complete. These should be taken directly from the functional specification. One example might be: ‘please attempt to turn on the TV and select the Disney channel’.
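To keep every task traceable back to the specification, the task list can be held as simple structured data. A minimal sketch, assuming the spec is broken into numbered clauses (the clause IDs and task wording below are invented for illustration):

```python
# Hypothetical task list drawn from a functional specification.
# Clause IDs and wording are invented; each task cites the clause it tests.
tasks = [
    {"clause": "FS-3.1", "task": "Turn on the TV and select the Disney channel"},
    {"clause": "FS-3.2", "task": "Dim the living room lights to 50%"},
    {"clause": "FS-4.1", "task": "Answer a call from the front door intercom"},
]

# Print a numbered run sheet for the facilitator to read from.
for i, t in enumerate(tasks, start=1):
    print(f"Task {i} ({t['clause']}): {t['task']}")
```

Keeping the clause reference next to each task makes it easy, after the test, to point at exactly which part of the agreed specification passed or failed.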
You will also need equipment that can capture the actions and comments of the test ‘user’. It’s imperative that both successes and failures are captured in all their glory.
One possibility here is a video camera or camera phone on a tripod pointed at the system being tested. It should capture the facilitator's instructions, the user's actions, any feedback from the system itself and any comments the user makes as they attempt each task.
Environment
The idea here is to make the person testing the system comfortable while simulating, as closely as possible, the actual environment the system will be used in. This helps capture the most accurate information and avoids spurious failures caused by the test setup rather than by the system itself.
With that in mind, the room should be as close as possible to the real room the system will be used in: at a representative temperature and as physically representative of the intended environment of use as can be managed.
For example, if the system is intended to be used outside in all weathers (e.g. a front-door intercom) then it should be tested outside if possible. Influences like appropriate clothing (e.g. gloves), weather and temperature should be simulated where possible to fully test for real-world conditions.
Results, Conclusions & Actions
Once the tests are complete, it’s time to compile the results, discuss outcomes and make any needed changes to the system to ensure it meets the functional spec and is easy to use.
There are usually two phases to compiling and evaluating the data. The first is a simple analysis of the pass and fail status of each task. For example, if four people tried to complete a task and three people failed then clearly the system needs adjustment!
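This first phase is just a tally. As an illustrative sketch, if results are recorded as one pass/fail outcome per user per task (the task names and outcomes below are invented), the counting looks like this:

```python
# Hypothetical phase-one results: True = user completed the task, False = failed.
results = {
    "Turn on TV, select Disney channel": [True, False, False, False],
    "Dim living room lights to 50%":     [True, True, True, False],
}

# Flag any task that a majority of users failed as needing adjustment.
for task, outcomes in results.items():
    fails = outcomes.count(False)
    verdict = "needs adjustment" if fails > len(outcomes) / 2 else "passes"
    print(f"{task}: {fails}/{len(outcomes)} failed -> {verdict}")
```

The majority-fail threshold here is an arbitrary illustration; in practice the functional specification, not a script, decides what counts as a pass.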
The second phase involves diving into each test, watching the videos back, to see what went right and what went wrong. Perhaps buttons are not easy to find, not clear in their purpose or simply not programmed yet!
It is also possible that some findings will highlight unforeseen improvements that could be made. For example, simpler ways of doing things can present themselves as can redundant functions that could be removed to keep things simple.
Rarely, if ever, will a well-conducted user test not reveal previously unseen issues within designs, installations, or programming. While user tests do take some time and effort, their benefits almost always pay back in spades. Furthermore, lessons learned on one user test can, if captured and acted on, easily improve the effectiveness of future designs.