The SpoolCast with Jared Spool

The SpoolCast has been bringing UX learning to designers’ ears around the world since 2005. Dozens and dozens of hours of Jared Spool interviewing some of the greatest minds in design are available for you to explore.

Episode #169 Caroline Jarrett - Designing Effective Surveys

April 6, 2012  ·  30 minutes


Getting data from your users is a fundamental part of creating great user experiences. Surveys are a great way to get feedback and learn about your users. The problem is everyone has sat through a painful, monotonous survey that asked a series of frustrating and seemingly pointless questions. As with anything in UX, if your users sense they’re in for a painful experience they simply won’t engage with your survey.

Show Notes

Caroline Jarrett, author of Forms That Work: Designing Web Forms for Usability and the upcoming Rosenfeld Media book, Surveys That Work, has been writing and presenting on survey design since 2002. In her Next Step Series Virtual Seminar, Designing Effective Surveys, Caroline offers tips on how to entice people to take the survey and keep them engaged to ensure you get accurate answers from them. During the seminar, we ran short on time to answer all of the audience’s questions, so Caroline joins Adam Churchill to tackle the remaining ones for this podcast.

Tune in to the podcast to hear Caroline answer these questions:

- What survey tools should you use?
- Is the Net Promoter Score a good or a bad thing?
- How do you get better answers to open-ended questions?
- Can you trust samples bought from incentivized panels?
- How many responses do you need to be confident in the data?
- How many questions should go on each screen?
- Do boilerplate demographic questions undermine trust?
- How can surveys work alongside other UX methods?
- Is complex survey logic worth the effort?
- What changes when you present a survey in person?

Full Transcript

Adam Churchill: Welcome everyone, to another episode of the SpoolCast. Last week Caroline Jarrett presented a Next Step Virtual Seminar, Designing Effective Surveys. The Next Step series of seminars is being created with help from the folks at Rosenfeld Media. Why? Well, in this case, much of the thinking from the seminar comes from Caroline's book on survey design, being published later this year by Rosenfeld Media.

This seminar with Caroline has been added to our UIE User Experience Training Library, which has over 85 recorded seminars from wonderful topic experts just like Caroline Jarrett, giving you the tips and techniques you need to create great design. Welcome back, Caroline.
Caroline Jarrett: Oh, I'm delighted to be here, Adam, it's great.
Adam: For those that weren't with us in the seminar, could you give us a brief overview of what we covered?
Caroline: Yes, I was talking first of all about how surveys are something that we don't tend to turn to first as our main option in user experience. They tend to be something where someone comes along and says, "I'm going to do a survey anyway." But the more I've been working with surveys and researching them and finding out about them in general, the more enthusiastic I've become. I think they're a valuable tool that we can have in our toolbox alongside other things.

And so in the seminar, I was thinking about if we're going to do a survey, how can we actually make it a really good one? And so I presented some tips on how to entice people to take a survey, and then once they're in the survey, how to get them to engage with the questions they've got, and to answer accurately, because you don't just want data, you want good quality data. We finished off with a few tips around: now you've got the data, how do you actually get some good stuff out of that data for your stakeholders.
Adam: Good, let's get some of the questions that were left over or that we maybe felt were worth readdressing. There were lots of folk in the audience that were looking for your recommendations and thoughts on tools that are available to help design more effective surveys. Can you offer up your thoughts there?
Caroline: That really amused me, because after the seminar, I had a look at the Twitter stream. One of the people who asked that question had actually laid a bet on Twitter that I wouldn't answer it. So I guess he owes me a beer now.

It's a good question, and the answer I gave, and I think was the right one to give, is that my favorite survey tool is whichever one my client wants me to use, and that can be a whole range of different things. The high-end market research tools tend to be very, very powerful and can do almost anything, but they're expensive, and they have a very brutal learning curve. In some ways, because you can do anything in them, there's a temptation to do anything in them. Some of the surveys can end up becoming pretty complicated and unwieldy, which isn't a great user experience sometimes. But if you really need to do a complicated cross-tabulation, then maybe something like Confirmit or one of the really high-end tools is what you need.

Then right at the other end, I've had to work with Word on many occasions. You know? Let's get those questions circulating around the organization in Word, and then maybe even present the survey in Word. As we all know, Word isn't designed as a question-and-answer tool, so there are lots of things you can't do very easily in Word either.

Right in the middle are things like SurveyMonkey, SurveyGizmo, those kinds of online tools that you can get. I've had some pretty good experiences with them as well. Just as with most other things, you tend to get what you pay for. If you use the free tool, you'll be able to put something together, but maybe it won't have some of the bells and whistles that you'd really like. So a slightly evasive answer, but just to give you a sense of the range of things that are available.
Adam: There were some folks in the audience that are struggling with Net Promoter Score. Is that a good thing? Is it a bad thing? What's your experience with that?
Caroline: It's both a good and a bad thing. You'll find people who will say the Net Promoter Score is the ultimate question. Perhaps I should step back very briefly, because not everyone will be familiar with the Net Promoter Score, and explain what it is. It's not about the Internet; it's "net" in the sense that if you subtract one figure from another, you get a net figure.

So with Net Promoter, you get people to assess whether or not they'd recommend you to a friend on a scale from one to 10. People who give a score of nine or 10, which is to say, "Yes, I'd very much recommend you," are regarded as the promoters. People who give a score of six or below, I think it is, are regarded as detractors.

Then you subtract the percentage of detractors from the percentage of promoters, and you have your Net Promoter Score. You ignore everybody who gave you a seven or eight, because they're in favor of you, but they're kind of lukewarm in favor of you.
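As a rough illustration of the arithmetic Caroline describes, here is a minimal sketch in Python (the ratings are invented; the cut-offs are the conventional ones she mentions, and the standard question actually runs from zero to 10):

```python
def net_promoter_score(ratings):
    """Compute NPS from "would you recommend us?" ratings.

    The standard question uses a 0-to-10 scale. As described above,
    9-10 count as promoters, 0-6 as detractors, and 7-8 are ignored;
    the score is the percentage of promoters minus the percentage of
    detractors.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)


# Ten invented ratings: 4 promoters, 4 lukewarm, 2 detractors -> NPS of 20
print(net_promoter_score([10, 9, 9, 10, 8, 7, 7, 8, 5, 3]))  # 20.0
```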

Some organizations find it convenient to boil everything down to that one score and just monitor it as a key performance indicator, possibly the only performance indicator for their organization. And if you're in an organization that values the Net Promoter Score that much, you'd better get behind it, measure it, and really work with it.

I'm a bit skeptical because I tend to be someone who thinks that user experience is very multifaceted, and boiling it down to just one number possibly doesn't give us a full picture. Also, if you have a bad number, it tells you you've got a bad number, but it doesn't tell you the why of that bad number. So that's also a bit tricky.

But on the other hand, maybe that gives you the opportunity to come in and say, "I want to use some of my other UX techniques in order to delve into that number." That's what a Net Promoter Score is.

Just from the point of view of using a Net Promoter Score as a question in a survey, we have to ask whether that question means as much to the people answering it as it might to the business. There are some things where "I'll recommend this to a friend" is a really important thing that people would actually do. But there are other things where you'd never recommend it to a friend because you don't do recommending, and you certainly don't do recommending of those type of things.

So you might actually be very enthusiastic about the product, but you just might not ever feel the urge to recommend hemorrhoid cream to your pals. You know? That's not then giving a true measure of the value of that product. I have my skepticism about Net Promoter Score.
Adam: A couple of people asked about open-ended questions. You were obviously talking a lot about the quality of the questions and how to get people to engage with the survey. Chris wants to know the best way to improve open-ended questions.
Caroline: So open-ended questions are where you give someone a big text box and they can type in what they like. It might be something like, "Tell us what you like about this service." First of all, you need to make sure the question you ask is an open question in the sense of giving an open answer. If you ask a question that can be answered with "Yes" or "No", people might very well be inclined to answer "Yes" or "No".

I remember one survey which asked, "What results are you achieving with this technique?" The answer came back, "Good results," which was an accurate answer, but not a very helpful one. You need to try to use the same sort of techniques in creating an open question that you'd use when probing in a usability test. You're trying to get somebody to open up: "Tell me more about that." That gives a cue that you'd like a longer answer. Make sure there's something genuinely open about it.

Make sure the question's interesting. I covered that a bit in the seminar as well. If it's something that's interesting and engages their imagination, they're more likely to want to give you a longer answer.

But also, be wary of the amount of burden you're putting on people. Answering open questions is quite time-consuming. And if your respondents include people who perhaps aren't that articulate, or are answering in a second language, or are just a bit rushed, then they don't necessarily have the time to compose a lot of open answers. Think about the burden on them, don't give them too much actual typing to do, and be a little sparing with how many open questions you ask.
Adam: Nicki wanted to know about your thoughts about open-ended questions with an edit box for feedback.
Caroline: Right, so that's the same thing. You're asking someone for general feedback. Do you want to guide them to the sort of feedback you're interested in, or do you want to just throw it open to the world, and say "Feed back whatever you like." Those are the sorts of questions you need to ask yourself.

Which type of feedback is most valuable is a compromise between what you know or think users want to tell you, and what you know or think your stakeholders will actually make use of. Just giving people a box is an invitation to tell you what they think, but there may also be an implicit conversation that says, "Now I've told you my whole problem, I expect you to do something with it."

If they're going to pour out their heart and soul about some issue, and then it's going to be ignored, that could also damage the long-term relationship with that customer. It's a sort of balance of considerations between guiding them to something specific, but also giving them a chance to vent or praise or do what they want to do.
Adam: Colin asks about survey samples provided from what he describes as a big panel, where participants are rewarded with prizes and the like. Folks being rewarded to complete surveys like that are often rushing through without thinking carefully. Any tips for making sure you're getting the best data from samples like these?
Caroline: That's been such a hot topic this week in the world of survey methodologists and market researchers. There's a blog that I read called the Survey Geek. He posted about a recent market research conference where they had a panel of what they called professional respondents.

Professional respondent is not a polite term, really, in the market research industry. It describes people who are gaming the system, answering as many surveys as they possibly can in order to actually make real money from it, rather than a little bit of pin money.

The challenge for the industry of the professional respondent is that some people are rather cynically exploiting it and are signing up to a zillion panels, maybe answering the same survey many different times. When you get your sample, do you know if you're getting the genuine, real folks out there? Or are you getting people who are just desperately trying to make money at it? That's perhaps the underlying problem of the panel.

And it's known that there are some of these people out there, and it's known that some people will just try and pretend: "This survey's about owners of BMWs. Well, today I'll pretend my rust bucket is not just any old rust bucket, it's actually an ancient beamer."

I've done quite a bit of research with respondents who are members of panels, and my experience of them has been they're really nice folks. They may be trying to earn a buck or two, but nobody ever got rich from being a panelist. They're probably trying to be as honest as they can, providing you're not giving them loads and loads and loads of really boring questions.

And so partly in response to that post on the Survey Geek's blog, which actually came from a colleague, I wrote my own post asking, "Why might your panels be cheating you?" That's on my own survey design blog. Because I tend to think that most of the panelists are just people who happen to be a bit opinionated, and don't mind being paid a little bit of money for answering a few surveys, maybe a few surveys a week.

Having said all that, when you buy a sample from a panel, you might be getting some professional respondents, and you might be getting people who think that answering surveys is fun. Perhaps you should be a little skeptical, but still use them, because they can be a way of getting in touch with an audience that is hard to reach any other way.

The people that you can't easily recruit just by hanging around outside your local supermarket or something with a clipboard. It might be that getting that sample in is the only sensible way to do it.

And also, bear in mind that enormous numbers of highly respected brands are using those type of samples to test things out and getting good results from them. So why shouldn't we as well, you know? It's an interesting question.
Adam: Susan wants to know when you're thinking about the number of responses you need to be confident in the data set, is there a lower limit to the number of responses you need?
Caroline: It's a challenging question in statistics, the sample size, just as it's a challenging question in user experience. I mean we've had debates for years about how many users are enough for a usability test, and in some ways, the same thing applies in surveys.

So it depends on the strength of feeling that you're trying to investigate, and there are statistical calculations that say if you want to detect an effect of a certain size, you put numbers into a formula, and it will tell you what sample size you need. Then you have to think about your response rate: if I need 300 responses, and my response rate is 30 percent, then I'm going to have to send 1,000 invitations in order to get the right level of response.

The levels of response you need from a sample size calculation are nearly always way lower than what you might call face validity. Face validity is: do your stakeholders, not statisticians, believe in it or not? That's also something we're familiar with in UX. Our stakeholders often don't believe that the three or five or eight people we have in a usability test is going to be good enough, but we know it is. But how do we convince them?

The same can be true here. People say, "We have a customer base of 300 million people, how come we can get a decent, reliable, statistical result from sampling 1,000 people?" All I can do is say, "You maybe need to look at the sample sizes that national political polls use in the US," the Gallup Poll or whatever, which are probably looking at 1,000 people, and they get reasonably accurate results, within three to five percent.

If you can stand a little bit of risk, if you can stand to be not absolutely accurate, then you can get pretty good results from quite small samples.
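As a rough sketch of the two calculations Caroline alludes to, here is the standard simple-random-sample formula for a proportion plus the invitation arithmetic (the specific figures below are illustrations, not numbers from the seminar):

```python
import math


def sample_size(margin_of_error, z=1.96, p=0.5):
    """Respondents needed for a proportion at roughly 95% confidence.

    Standard simple-random-sample formula n = z^2 * p * (1 - p) / e^2,
    using the worst-case p = 0.5 and ignoring the finite-population
    correction (for audiences of millions it barely changes the answer).
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)


def invitations_needed(responses_needed, response_rate):
    """How many invitations to send, given an expected response rate."""
    return math.ceil(responses_needed / response_rate)


print(sample_size(0.05))              # ~385 respondents for +/- 5 points
print(sample_size(0.03))              # ~1068 respondents for +/- 3 points
print(invitations_needed(300, 0.30))  # 1,000 invitations for 300 responses
```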
Adam: More folks are looking for you to quantify their efforts: the team at SuperValu is asking if there is any general guidance regarding the number of survey questions per screen.
Caroline: Right, and I've been keeping screenshots of a lot of surveys over the last few years. I've noticed that with tools like SurveyMonkey, when people develop those sorts of surveys, they often go for a large number of questions on one page. When people are using the tools I was mentioning, like the high-end market research tools such as Confirmit, they tend to go for a very small number of questions per page.

Both approaches have their merits, and the real expert on the experimental effects of survey design is Mick Couper, that's spelled C-O-U-P-E-R. His book, which I highly recommend, is called Designing Effective Web Surveys. I made it one of my books of the month on my blog last year because it's such a highly respected book. He goes into all the considerations of balancing whether you should have a paged design or put everything on one page, and how you should break up the pages.

In the end, it probably doesn't really matter, providing people don't feel overwhelmed. They can feel overwhelmed in a number of different ways. One of the ways they can feel overwhelmed is if you give them too much on one page at once and they feel, "Gosh, that's a long page!" Another way they can feel overwhelmed is, "I've answered page after page after page, and I don't seem to be getting there."

It's rather like the design of an application or a website: it's about giving people a sensible chunk of stuff to do, that's not too much for them to get their heads around at once, and doesn't keep them at it for too long. So it's a question of balance, and I'm afraid there is no definitive answer like "don't put more than three questions on a page." It's just not that simple. It's a question of bringing the design skills you're used to using in other parts of user experience to designing a good experience here as well.
Adam: Our friend Cliff wanted you to say a bit more about a BBC example that you spoke to in a seminar. And he suggested it sounds like they might have used a boilerplate set of demographic questions. Does that in itself tend to undermine credibility? Might survey respondents or site visitors think the only reason they're asking for that information is because they in turn have information to sell?
Caroline: It could be a question of trust. One of the things I talked about with enticing people to take a survey is that it's a balance: do they trust you with their information, how much effort have they got to put into it, and how much reward are they going to get back from it? Mostly, we're offering people the reward of feeling good about telling us stuff. People don't really feel good about sharing personal information; they've become very wary of it.

So in the case of the BBC survey, when they asked me for some demographic information, I was making the point that I'm a big BBC fan, and I do trust them to manage my data properly, so I was willing to give them that. But in other circumstances, people can be very wary, so hitting them with too many demographic questions can undermine that level of trust.

The other problem with demographic questions is that they're quite boring to answer, which goes back to the point I was making about professional respondents. One of the things that makes you start feeling bored is being asked again and again for the same sort of "Who are you?" questions, when the reward you've been offered for the survey is "We'd like to ask for your opinion."

My age isn't part of my opinion. My age is just a sad fact I have to live with, and I'd rather it kept going up than stopped. There's nothing interesting about it, and it's not my opinion. It's simply dull to ask people too many demographic questions.

You need to try and keep that to the absolute minimum you can, consistent with doing some things like checking "well, did the demographics of this sample approximately match the demographics of the target audience we're trying to reach?"
Adam: Alicia wants to know if you ever use surveys in conjunction with another UX method that provides extra value, perhaps using the same participants.
Caroline: In fact, many of us use surveys with every usability test. Just yesterday, I was running a usability test, and I was using one type of survey, a variation of the Microsoft Reaction Cards that people may have heard of. If you haven't heard of them, I'd recommend having a look at Carol Barnum's excellent talk about using the Microsoft Reaction Cards, which is available on SlideShare.

That's like a little survey, so part of my test was I just wanted to make sure that I captured some data that was consistent. Another tool that you might be aware of is the System Usability Scale, SUS, which is another well-known little survey tool or questionnaire tool that people use a lot, right there as part of their testing.
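For anyone who hasn't met it, the System Usability Scale Caroline mentions has a standard, published scoring rule; here is a minimal sketch of it (the example answers are invented):

```python
def sus_score(answers):
    """Standard System Usability Scale scoring.

    `answers` is the ten 1-to-5 responses in questionnaire order.
    Odd-numbered items contribute (answer - 1), even-numbered items
    contribute (5 - answer); the total is multiplied by 2.5 to give a
    score from 0 to 100.
    """
    if len(answers) != 10 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("SUS needs ten answers, each between 1 and 5")
    total = sum((a - 1) if i % 2 == 0 else (5 - a)
                for i, a in enumerate(answers))
    return total * 2.5


# Ten invented answers from one participant
print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # 82.5
```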

We're used to using little surveys anyway, but also there's this idea of triangulating, which is to say, "I'm going to run a usability test on this. Perhaps I could do a survey alongside to see if the sort of things I'm seeing in my small sample are actually replicated in larger numbers."

I can't emphasize enough how useful that is. This is why I've become a real fan of surveys, particularly for organizations that are becoming a lot more mature in their user experience. After a while, you start to realize you're seeing a lot of things in your small-scale usability tests and other one-on-one activities, and you'd like to know, "Are they really replicated out there in the same proportions I'm seeing in these small samples?" That's where a survey can be really valuable: helping you check those numbers against each other. That's where I think they really start to be worthwhile.
Adam: Paula wants to know if in your work and your experience, have you ever seen a true payoff in designing a complex survey, where there are things like dependencies, show if, the pulling in of previous responses into the next question, that type of thing?
Caroline: I see a lot of them in market research, where clients want to test, "Well, if you were interested in this, then I want to ask you that or the other." You can do them, but I'm not at all sure they're worth the effort. You can end up in a bit of a morass of complications, where the complications almost become an end in themselves, and the survey has a tendency to get quite long.

So a bit of skip logic, which is the if-then part, can be very valuable. If someone has told you that they have a religious objection to alcohol, don't ask them a whole lot of questions about their drinking habits; it would be insensitive and impolite. Just let them skip over all of that. Use it for those cases where a few questions would be completely irrelevant and annoying for somebody.
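A minimal sketch of the kind of skip logic Caroline describes, with hypothetical question ids and a made-up rule format rather than any real survey tool's API:

```python
# Hypothetical questions; the ids, wording, and rule syntax are invented
# for illustration, not taken from any real survey tool.
QUESTIONS = [
    {"id": "drinks_alcohol",
     "text": "Do you drink alcohol?",
     "skip_if": None},
    {"id": "drinks_per_week",
     "text": "Roughly how many drinks do you have per week?",
     # Skip the follow-up unless the respondent said yes above.
     "skip_if": lambda answers: answers.get("drinks_alcohol") != "yes"},
    {"id": "eating_out",
     "text": "Where do you usually eat out?",
     "skip_if": None},
]


def remaining_questions(answers):
    """Return the question texts a respondent should still see."""
    shown = []
    for q in QUESTIONS:
        if q["id"] in answers:
            continue  # already answered
        if q["skip_if"] and q["skip_if"](answers):
            continue  # skip rule fired
        shown.append(q["text"])
    return shown


# Someone who doesn't drink never sees the drinking-habits question.
print(remaining_questions({"drinks_alcohol": "no"}))
# ['Where do you usually eat out?']
```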

On the other hand, too much logic in there, and maybe you're going to lose sight of why you're really doing it. Maybe you should just let some people jump out of the survey, and then catch another group of people another day, rather than having too much complication.
Adam: Nicki wants to know if you can tell us anything about presenting a survey in person. The example she offers up is if you're at a sales kiosk or if you're at a conference.
Caroline: Right, so it's a great opportunity. You've got customers, perhaps they're quite rare people, and they've turned up at some event, and you think, great, I'll catch them here. But you do have to be so careful of their time. Good, short surveys are always important, and when people are right there in person, it's even more important, because they're not there to do what you want to do, they're there to do whatever they need to do that day.

You have to prune it down to the absolute minimum you possibly can. Respect their time. Also respect the fact that not everyone will want to answer there and then. You might even do a little bait and switch: give them the opportunity to answer a couple of questions, to find out that the questions are really interesting, and then say, "Have you some more things you'd like to tell us?" Then give them a card to take away with the URL or an email address, so they have an opportunity to add more if they want to.

You'll lose most people, obviously. You'll get a very low response rate that way, but that might be better than alienating people right there and then. Giving them the chance to open up a conversation later, rather than ruining the conversation now, could be the way to go.

Also, I've been to conferences where I felt a bit overwhelmed and bored, and would have loved the opportunity to sit down quietly with a little survey for a few minutes. Maybe that's just my nature. So yes, you've got them there, you can catch them. If you're careful about their time, then they will actually do things for you.

One of the best examples I saw of a survey at a conference was at a technical communication conference, the Society for Technical Communication's symposium. STC people, the technical communicators, all love words, and many of us love word games. The designer of that survey had included a little game to play. It really hooked people in, and then they'd happily answer a couple of extra questions; they enjoyed doing it.

It worked because it was very well designed, just a little game, and people really did respond to it. Plus, of course, the prize drawing that went with it is another big incentive. People do love the chance of winning an iPad or something like that. At a fairly small conference, where they can see there are only a couple of hundred people, they think, "Yeah, I might have a good chance of that." And they can see the actual item right there. That's another way of increasing the perceived reward and the chance of them responding.
Adam: Very interesting stuff. You've got a book coming out through Rosenfeld Media later this year, and one of the things folks can do is go to the Rosenfeld Media site and sign up, and Rosenfeld Media will let them know when it's available. I don't want to speak for them, but typically Rosenfeld is pretty generous with aggressive discounts and free shipping and all kinds of cool stuff like that. Any other reason why they should sign up to learn more about when your book's available?
Caroline: Well, going to the Rosenfeld Media site is a good idea in general, because Lou really encourages all of us to blog about our books and respond to questions and that sort of thing, and it's a really good policy. Plus there are some other great Rosenfeld Media books coming out.

But the other thing directly related to surveys is that I blog about surveys there myself from time to time. Being very specific, I've put a post on the site that has the slides from this talk embedded, and it's also got a selection of the resources I mentioned within the slides, so it saves you digging around in the slides for a link. I just put the links together there, so you can simply click and be away.
Adam: Excellent. Caroline, thanks for circling back with us, we really appreciate your time.
Caroline: It's been a pleasure to be here, Adam, thanks so much.
Adam: For those listening in, thanks for joining us, and thanks for your support of the UIE Virtual Seminars. Goodbye for now.