Monday, June 6, 2011

Usability Testing - How many users should you test?

One of the last presentation sessions I attended at Intranets 2011 was by Step Two's Rebecca Rodgers (@rebeccarodgers), titled "How do you find out what staff need?".

Rebecca is a specialist in performing user needs analysis for organisations, so I was keen to attend her presentation, and I was definitely not disappointed. When Rebecca came to discussing usability testing, I raised the question of how many users should be tested and referred to Jakob Nielsen's well-known Alertbox article from 2000, "Why You Only Need to Test with 5 Users".

The discussion that followed raised many differing opinions. Some were of the view that the research performed by Nielsen & Landauer was specific to Internet Web sites and not to Intranets. Others maintained that 5 users was definitely not enough and that testing a minimum of 20 users was more appropriate.

My experience tends to agree with the research (now nearly 20 years old) that testing 5 users at a time provides the best insight into an Internet or Intranet site's usability problems.

The Nielsen posting explains why 5 users are enough:

"As soon as you collect data from a single test user, your insights shoot up and you have already learned almost a third of all there is to know about the usability of the design. The difference between zero and even a little bit of data is astounding.

When you test the second user, you will discover that this person does some of the same things as the first user, so there is some overlap in what you learn. People are definitely different, so there will also be something new that the second user does that you did not observe with the first user. So the second user adds some amount of new insight, but not nearly as much as the first user did.

The third user will do many things that you already observed with the first user or with the second user and even some things that you have already seen twice. Plus, of course, the third user will generate a small amount of new data, even if not as much as the first and the second user did.

As you add more and more users, you learn less and less because you will keep seeing the same things again and again."
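The diminishing returns Nielsen describes follow a simple model from the Nielsen & Landauer research cited in the same article: the share of usability problems found by n users is 1 - (1 - L)^n, where L is the proportion of problems a single user uncovers (about 31% in their study). A quick sketch (in Python, purely as an illustration of the formula, not part of the original research) shows the curve:

```python
def problems_found(n_users, discovery_rate=0.31):
    """Expected share of usability problems found by n_users,
    using the Nielsen & Landauer model 1 - (1 - L)^n,
    where L is the per-user discovery rate (~31% in their study)."""
    return 1 - (1 - discovery_rate) ** n_users

for n in (1, 5, 15):
    print(f"{n} users: {problems_found(n):.0%}")
# 1 users: 31%
# 5 users: 84%
# 15 users: 100%
```

Note how the first user alone surfaces roughly a third of the problems, 5 users find around 85%, and beyond that each extra user in the same round adds very little.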

This does not mean that you stop after 5 users. It means you test in rounds of 5: attempt to fix all the issues arising from one round of user tests, then test again with the next group of 5.

I have found that this cycle of testing, fixing and testing again makes the most of everyone's time and resources. In my experience, by the time you have tested 4 groups of 5 users you will have identified around 90% of usability problems.

Of course, careful consideration needs to be given to selecting which users to test. It is best to choose from a wide variety of employees, with varying degrees of computer skill and diverse roles in the organisation. Try hard to ensure that as broad a range of employees as possible participates, from the Executive level to those in call centres, outlets and on the road. This is also highlighted as an important consideration in the research.

"If, for example, you have a site that will be used by both children and parents, then the two groups of users will have sufficiently different behavior that it becomes necessary to test with people from both groups. The same would be true for a system aimed at connecting purchasing agents with sales staff."

I have always found usability testing to be fun and enlightening, and while not everyone would share that view, it remains an essential exercise that all Intranet Managers should carry out.
