The SpoolCast with Jared Spool

The SpoolCast has been bringing UX learning to designers’ ears around the world since 2005. Dozens and dozens of hours of Jared Spool interviewing some of the greatest minds in design are available for you to explore.

Episode #59 Follow-up to Conducting Usability Tests in the Wild

November 21, 2008  ·  30 minutes


Show Notes

Back in October we had the good fortune to host Dana Chisnell's popular Virtual Seminar, The Quick, the Cheap, and the Insightful: Conducting Usability Tests in the Wild, where she told us you don't have to run usability tests by the book to get great value out of them. Quite a statement considering she co-wrote the book: The Handbook of Usability Testing, Second Edition.

As happens frequently, seminar viewers sent in more excellent questions than we could answer during the session, so we sat down with Dana afterwards for a quick follow-up.

In the interview, Dana gave me great answers to these viewer questions:

  • Is there a middle ground between "classic" testing and "quick and dirty" techniques?
  • How many people do you need in these "wild" tests to create enough valuable data?
  • How should you screen subjects?
  • Should designers observe "wild" tests?
  • How do you answer critics who claim quick and dirty testing is not scientific?
  • What ethical issues are there with recording test subjects?
  • Once you get this quick data, what are the next steps?

During the podcast, Dana and I also talked about ways to analyze results, and we mentioned the KJ Technique. It's a great way to get a team on the same page about the top priorities that emerge from testing. You can find more about the technique in this article.