Episode #253 Cyd Harrell - Doing "Pocket Research" to Learn About Your Users’ Lives Live!
Mobile phones are like research platforms in our pockets. With the right strategy, we can quickly understand our users’ behavior, wherever they are. And given the ubiquity of mobile usage — even among hard-to-reach populations — we as UX designers are especially poised to make our lives easier while designing better products. That is, if we actually do the research. Fortunately, Cyd Harrell knows how to gather data without breaking budgets or extending timelines.
In this talk, you’ll hear how to:
- Choose the right mobile methods to answer deeper UX questions
- Set up a diary study, experience sample, and SMS and voice-based survey
- Identify physical space triggers, and use them to gather more user data
- Build research into your process without breaking the bank
Cyd’s been doing remote research since 2007 while at Bolt | Peters. She even developed methods to broadcast remote research sessions to observation teams. Today, as the UX lead for Code for America, Cyd regularly does research on mobile phones with everyone from low-income residents to smartphone-happy elite populations.
Cyd Harrell: I had an interesting experience last December. Have any of you ever gotten pickpocketed in the smartphone era? Raise your hand. So this happened to me. And I should probably say pick-laptop-bagged, because somebody reached right into my Timbuk2 bag and grabbed out the little pouch that contained both my wallet and my keys.
And one of the major thoughts I had was thank goodness it wasn't my phone. I still had my phone. Now, when this happened, I was on my way to have lunch with a relatively new business acquaintance. And I didn't feel a thing. So we had a nice lunch. We talked about work we might be able to do together.
And then we got to this awkward point where I discovered that there wasn't any wallet in my bag. And I said, ha ha. Hey, I know we don't know each other very well, but do you think you could pick up the check today? And of course my friend said sure.
Well, right around this time, I received a highly targeted, very customized survey of my recent purchasing behavior via that phone that I still had. So what actually happened was that my credit card company sent me a text, saying hey Cyd, are you getting checks cashed at a check cashing store in South of Market in San Francisco?
And I said um, no. And they said, well, did you just try to spend $15 on chips and sodas at one of the corner markets in San Francisco? And I said no. And they said OK, we're calling you right now. And we're going to get that credit card replaced for you.
This was a pretty great service experience, and one that wouldn't have been possible until pretty recently. It was also something I started to realize as a researcher, wait a minute, I just took a survey of my buying behavior, which is a pretty personal thing, right on my phone. Why did I do that? And what does that imply in terms of what I could ask participants to do as a researcher?
So I work at Code for America. I've actually gone full time there now. This is an organization that tries to bring together technologists and American cities to make a lot of those experiences that we think about, everything from the DMV to getting a building permit to getting a bus stop change near your house or a stop sign or any of those things, to make all of that better through technology. And one of our big concerns is working with people who have less access to the internet, perhaps, than a lot of folks in this room, whose maybe primary access is through their mobile phone.
And so it's been really important to me to start to discover not just how we can do usability research in terms of whether mobile apps are working for people, but how we can research deeper things about people's lives and how we can involve mobile phones in researching things like service designs that are more complex, that have a whole bunch of touch points. If you think about the kind of questions that we might want to ask in mobile research, these are pretty interesting ones. And the answers to these are not as clear as you might think.
There's been a bunch of research recently showing that actually a lot of users don't understand the hamburger menu outside of close-knit tech communities like ours. There's been a lot of questions about font size and whether clicks on mobile are real clicks or whether those are kind of fat-finger, unintentional clicks, and whether they're really valuable to the people who are advertising. These are interesting questions that we actually have a lot of methods to answer.
We can go a little bit deeper. Will people actually make purchases on a mobile phone? It's starting to change. I'm an early adopter, relatively. I would've thought I would never do it.
But a few months ago, I found myself buying children's pajamas on a Metro bus in San Francisco, because I was talking to the person who needed the pajamas-- that would be my kid-- and it turned out that the particular store that she wanted to buy pajamas from had a nice mobile site. And we got it done in five seconds on mobile.
Interesting. You wouldn't have actually had to be on the bus with me to know that that was a possibility. You could have looked at that interaction, perhaps, through work in a lab.
So we wanted to answer questions like these. It's actually pretty well known. We talked a lot about the methods to do this in my workshop yesterday. You need to recruit the right participants. That means people who care about the particular task that you're researching.
Often it means also people who have the right kind of phone or who have the right kind of access that you're looking for. And you can get to them through a number of ways. But if you have a set of people, say five or six or eight, who are interested in the task that you're looking at, and who you can either go to them or they can come to you, you can offer them a task in a whole bunch of ways. And this is one of my favorite pictures from one of my favorite blog posts ever about mobile research.
Because with a mobile phone you can do everything that we know of as UX research, from full-on live site tests to paper prototypes. And in fact, you can create these really simple paper prototypes just because a mobile phone is approximately the same form factor as a post-it note. Peel it off to interact, and you've got something going on.
Did anybody get the IPEVO for signing up early? You have such a nice little tool there. This is a great way to bring a picture of somebody's mobile screen right into your computer. And from there you can actually broadcast it to stakeholders, which for me is a critical part of mobile research practice. So if you were able to see what somebody's doing, you want the people that you're working with at your company to also be able to see it and to be involved in the research and possibly to be involved in real time, so that they can send in follow-up questions or even initial questions to you to pass on to the participant, so that you can have a full and rich research interaction.
Another really easy way to get at this, another one of my favorite hacks. This was pioneered by Jenn Downs at MailChimp a couple of years ago. If you want to do remote research with somebody who has a tablet or a phone and they also own a laptop-- this isn't going to get you to the mobile-only users. But if you have somebody who has this, or if somebody can come to your lab where you have a laptop, you can use the front-facing camera of a laptop to provide a full stream of the research session that you're having, including someone's mobile screen and their fingers in gestures, and you can still hear their voice just by spinning a laptop around and having them give it a hug.
Which turns out to be a pretty comfortable position. They just rest their elbows on the table. They hold their phone. It's a pretty natural position based on where they are.
So any of those questions that we were looking at. Does the hamburger menu work? Can people understand the affordances that I'm designing into this interface? Does my purchase flow work on mobile? Are people able to find the things that they want? Pretty easy to answer with these kinds of methods which come out of classic usability testing.
So great. There's a lot of really interesting questions that go a lot farther than this. When are people happy? What makes them happy? Not just in terms of interacting with a piece of technology, but in their life in general. What do we know about how often people actually check their social media messages?
So Facebook probably has data about that, but they probably aren't going to share. Also, most people would probably use multiple networks. And most people probably aren't that eager to tell the truth about it in a survey type of setting.
What do we know about tasks they take more than one app to accomplish? We use a couple of these as reference tasks in the workshop yesterday. For example, if you want to go visit-- there's a really cool art piece in Denver that I like. It's a big blue bear, and it's called I See What You Mean. So if you want to go see this particular cool piece of art and get lunch with a friend somewhere that has great coffee and great food, you end up using three or four apps to perform that simple task.
So how do we research stuff like that outside the lab setting? It's easier than it used to be. But I wanted to share a little bit with you about how it used to be.
So my first mobile study was in 2008 working for a major car company that wanted to make a concept car for the 2009 Geneva Auto Show. Now, this is a picture of one of the participants. And you might note, interestingly, that he's got a pretty fancy car, but he's got a flip phone. That was kind of where we were in 2008. iPhones existed, but they were really kind of exotic and not a lot of people had them.
So the company hired us to look at what do people actually do with their mobile phones when they're riding around in cars? And they said, we don't think people tell us the truth when we ask them in surveys. They say, I never touch that when I'm in my car.
As it turned out, they were right. People were not fully telling the truth, and maybe even didn't realize the truth about this. But what we had to do to figure this out was to arrange to ride along with participants on trips that they had already planned.
So this gentleman was taking his dog to the park. We rode along with people who were dropping their kids off at school or who were commuting to work. And we sent two researchers equipped with a high-resolution camera, a laptop with one of those Sprint cards that were in big use then, because it was hard to tether your phone to your computer for constant internet access.
The researcher with the laptop sat in the backseat and broadcast a crummy video from their webcam to the engineers in Germany and on the east coast of the United States who could then send in questions. The researcher with the high-res camera sat in the front seat, took photos like these, and videos of what people used.
And we ended up categorizing the photos and creating a Flickr-based photo browsing thing, where we had tags like one hand, zero hands, pets, children, distraction, all of this. And to support the team of two researchers that went along on this, we had a home team of two people. So it took four researchers to do one of these sessions at the time.
But the information that we got was really, really interesting. And they were able to see that there were all these interesting emotional things about the way that people interacted with phones in the car. People were really upset when they got a phone call in the car, a lot of them, because the car was the one place that they could be alone and peaceful, especially if they got to drive through somewhere pretty. But at the same time, if they were listening to music and they had a device like an iPod or a phone that had music, they couldn't keep their hands off it. Because they could adjust it, they did.
And so they built a lot of these features into this concept car that went to the 2009 Geneva Auto Show, based on this deep understanding with four researchers. It was an incredibly expensive study. We charged them a ton in 2008.
So what's changed here? Well, for one thing, if you follow the Pew Internet and American Life Project-- which if you're interested in mobile technology is a great idea-- they release data three times a year about smartphone adoption in the United States. And in their recent January release, they said, we've got 58% of American adults using smartphones at this point. And that's out of 91% actually using cellphones. So it's no longer an exotic and interesting thing. Things have tipped over.
One of the really interesting things is that the digital divide has started to change in terms of how it operates. So for many people, a smartphone is filling in a gap if they don't have access to home broadband. And particularly in vulnerable communities, there are a lot of mobile-only users. And so in my space in civic tech, it's tremendously important to understand how access works for them. This access isn't as good as having full access to a desktop computer, but it gives us an avenue to approach doing research with populations that were very difficult to approach before.
I do want to say, if I back up to the story about my credit card company contacting me, that didn't rely on any advanced smartphone technology. They didn't send me a picture and a web survey and a whole bunch of complicated things that I had to open an app to do. They just sent me an SMS, which is one of the most basic technologies that's available on practically every phone, from a disposable burner on up.
Thinking about the field kit that we take out for ethnographic research-- internet-equipped computer, high-res camera, duct tape, buckets of cables, extra internet access, backups, multiple researchers-- you could really summarize it as recording devices, survey tools, notes tools for the researchers. It's like an entire laptop bag full of stuff, right? And if you are a researcher, you probably prepare detailed kits and check them multiple times.
And here's the thing that's really new in 2014. It's pretty much all available as part of something that people carry in their pockets, of something that when people get their wallets stolen, they're like, thank goodness it wasn't my phone. So how can we use that to get closer to the really interesting pieces of people's lives?
Also back in 2009, possibly prematurely, a Harvard graduate student named Matthew Killingsworth decided to take advantage of the fact that iPhones had recently been released and study what makes Americans happy. That's a giant, broad question. And the way he decided to do it was to use a classic psychology technique called experience sampling, where you check in with someone at fixed intervals, but not necessarily intervals where they know, and ask them, how are you doing?
And I signed up for this study in 2009, because I thought, yes, this is really, really cool. People are doing interesting things. I've got an iPhone 3G. I'm a researcher. I want to see how this works. And it was a nightmare.
I couldn't complete the study. You get a really cool reward if you last more than two weeks in the study, which is this chart-- well before the quantified-self movement, actually-- of when are you happiest, and what are the things that contribute to your being happy?
I'll show you some of my results in a minute, because I've been doing it again now that I have a phone that's actually capable of running a basic survey a lot faster. So, number one, it turns out I'm happiest on days that are not Tuesday and Wednesday, after running this for a few weeks. I wouldn't have guessed this. But I went and looked at this and correlated it with my calendar. And sure enough, Tuesday and Wednesday are days full of complicated meetings that I don't have time to think in between.
And just by getting these three pings a day-- hey, fill out a survey. Tell me how happy you are. Tell me what you're doing. He's been able to figure this out. Also unsurprisingly, when I'm doing something that I don't want to do, whether it's something that I also have to do or don't have to do, I'm a lot less happy than when I'm doing something that I want to do.
So far, so good. This seems to correlate with things that we would understand about happiness and with more general happiness research. Location. This is great. I pulled these out last night. And you can see that I misspelled hotel during a time when I was really happy. So you can just guess whether I was having a good time with the people I went out with last night.
But at the same time, this longer-term result of what I was doing when I was happy shows a bunch of things that have been coming up that freak people out, like I'm not happy when I'm caring for my child. Someone I absolutely adore, but in terms of whether I rate myself as happy when the primary thing I'm doing is childcare, not that happy. Not as bad as looking for a lost item or doing housework. But when I'm happiest, interestingly, is when I'm quiet and alone and reading, or just relaxing and talking.
So this is kind of interesting for me to have these results. But at the same time, he's correlated this across thousands of people who have participated. And suddenly, he's got a psychology study with a tremendous amount of data points just based on two simple affordances in the mobile phone. So he sends all the prompts by SMS. And he collects all of the answers just by a mobile web form.
Those are really the two hardest things about research, right? You have to provide the prompt somehow, whether it's a paper prototype, a timing prompt, or a question. That's all on you as the researcher. And you've got to collect the response in such a way that you can analyze it.
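Killingsworth's two moving parts-- an SMS prompt at times the participant can't predict, and a mobile web form to collect the answer-- can be sketched in a few lines. This is a hypothetical illustration, not his actual implementation: the 9-to-9 sampling window, the minimum gap, the prompt wording, and the survey URL are all assumptions.

```python
import random
from datetime import date, datetime, time, timedelta

def schedule_pings(day, n_pings=3, start_hour=9, end_hour=21,
                   min_gap_minutes=60, rng=None):
    """Pick n_pings random check-in times during waking hours,
    at least min_gap_minutes apart.

    The participant can't predict when a ping will arrive -- the point
    of experience sampling is to catch people mid-experience rather
    than asking them to recall it later.
    """
    rng = rng or random.Random()
    window = (end_hour - start_hour) * 60  # minutes in the sampling window
    while True:
        minutes = sorted(rng.sample(range(window), n_pings))
        gaps = [b - a for a, b in zip(minutes, minutes[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return [datetime.combine(day, time(start_hour)) + timedelta(minutes=m)
                    for m in minutes]

def make_prompt(survey_url):
    # The SMS body: a one-line question plus a link to the mobile web
    # form that actually collects the structured answer.
    return ("How are you doing right now? "
            "Take 60 seconds to tell us: " + survey_url)
```

Actually delivering `make_prompt(...)` at each scheduled time would go through an SMS gateway service; the scheduling logic is the researcher-facing part.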
Mobile phones provide incredible ways for us as researchers to do both of these, whether we're asking a question about hamburger menus or whether we're moving on to things like, why don't people participate in civic decisions that affect them? How come, when I'm walking down the street and I see one of those posters that says, this house is about to be remodeled, if you want to, you can come to a meeting at noon on Thursday at City Hall and make comments on this. And that's really if I read the fine print, right?
But-- and this is one of the questions that I'm personally looking at right now-- if I could use people's mobile phones to get closer to answering the question of what they want in that situation, or if I could even use the mobile phones just to figure out do they see these crazy posters, does anybody read them? Can I check in with people once a day and say, have you gotten any messages from the city that you live in that you noticed? That would be a lot of really useful data.
So we did a trial of the experience sampling method in my workshop yesterday, which I do every time I teach it, because I like to think about, what is the experience of the participant when I'm creating research? So I ask everyone who was in my workshop to set a timer on their phone. And I asked them to take a walk of exactly seven minutes, and stop and take the most interesting photo that they could take at that exact point.
They went with a partner. And the idea was then to survey each other and to fill out a little form that said, OK, here's the question that my partner asked me. Here's what I answered to the question that my partner asked me. Here's why I thought this photo was interesting, with a little three-word caption. The key difference between the idea that it's the most interesting photo within seven minutes of the Marriott and the idea that it's the most interesting photo exactly seven minutes from the Marriott was that I, as the researcher, was in control of when they did it.
And when I talked to people when they came back, they said, you know, this wasn't all that comfortable. I kept seeing interesting things that I could take a picture of, a picture that I will like and be proud to post.
And because I wasn't going to collect 75 phone numbers and send out an SMS, we did this via Twitter. So they were going to post this semi-publicly, unless they didn't have a Twitter account. And they said, you know, I saw a lot of things on the way over there that would be really cool and I'd be proud to post. And when I got to my seven-minute mark and my timer went off, I wasn't very comfortable with the photo options that I had. And people think of me as someone who posts good photos, and so this was awkward.
And I think that this is one of the things about this particular method, is that it's researcher driven. The participant isn't in control, and it can be uncomfortable. And yet the photos that they came up with had a lot of beautiful, vertical Denver, with captions about the architecture and about looking up and seeing the sky after being in a hotel for most of the day.
We got some really sweet expressions of loneliness, with captions about quirky, urban patterns. We got to some pictures that were unique to Denver. And a lot of the captions in discussions were about Denver.
And if you look at this though, there's a whole bunch of snarky responses to, why did you take this particular photo? Because you made me. And that's OK. This is actually probably not the way you would want to design a study. In fact, there's probably about 50 things wrong with it if you're thinking about designing the experience of the study for a participant.
And that's really important with some of these deeper studies where you're going to be interacting with someone over a long period of time. So I always think about designing the experience of a study for my participant. And I'll talk a little later about how we preflight that and how we test that for people.
But what I asked people to do was to get out of a comfortable chair, take a walk on a cold day, interact with at least three apps-- their camera app, Twitter, and the web form, which they had to go to a little Tumblr that we had for the class and link into and fill out-- without a very core motivation. I did offer them some prizes. But they were token prizes.
If I were to design something to do this, I would probably put a lot of effort into making it a lot simpler. And I would also put a lot of effort into, instead of a 10-minute introduction in a crowded room like this, making sure that I got really explicit consent to participate in this particular way, and that I also gave the person terrifically explicit instructions and guidance and a way to get help as they were participating in the study.
I always feel that when someone shares a slice of their life with us-- and often in research it's sort of a one or two hour slice of their life experienced intensely-- it's a real privilege. And when they share a series of little slices of their life in which, in fact, we may find out even more, we want to make sure that they don't feel harmed by it, that they know what's going to happen, and that we can design it in a way that's comfortable for them. Not least because-- for example, with my first experience with the Happiness Project-- if it isn't comfortable, the person's probably going to bail out and not complete the study and you're not going to end up with the data that you want. But I think it's more important that as a human, you're taking good care of them when you do a good study design and you make sure that they have a way to get their questions answered.
So there's a very interesting application which has been developed over the last couple of years. It's called dscout. The D stands for diary. And it's about a slightly different technique for longitudinal research. And that technique is a diary study.
Has anybody here ever done a diary study in the typical way? OK, so a couple. So it's a ton of preparation. You would send out, usually, a book containing a set of prompts. You might send out materials. You might provide the person with a nice set of pens and colors if you wanted them to draw things. You could ask them to do all kinds of things.
Typically a diary study is self-paced. And a person participates at intervals that you suggest, but at a time of their own choosing, filling out how many people they've interacted with in the course of a day, or talking about your brand and how many interactions they've had with it and how those have been. They're really useful in the course of a long-term service design for a service that has a long duration. So if you're looking at somebody who's going through a process that takes a while or through schooling or something, where they're going to use particular resources over and over again, it's a really useful technique.
So dscout provides a way to do a simplified version of this with a phone for iPhone or Android. It allows you to customize a whole bunch of prompts with a photo, with a set of questions, and to ask people to become your scouts and complete a mission. It doesn't have the high touch interaction of receiving this package in the mail, where it's like oh, I got a really nice little red book that has a beautifully folded set of instructions inside and a couple of pens. And now I'm really excited to do it.
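A diary-study mission of this kind boils down to a small data model: a set of prompts, plus timestamped "snippets" pairing an answer with an optional photo, submitted whenever the participant chooses. The sketch below is a generic illustration, not dscout's actual API; all class and field names are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Prompt:
    question: str             # e.g. "Were you hungry?"
    wants_photo: bool = True  # most snippets pair an answer with a photo

@dataclass
class Snippet:
    prompt: Prompt
    answer: str
    photo_ref: Optional[str] = None  # path or URL of the submitted photo
    submitted_at: datetime = field(default_factory=datetime.now)

@dataclass
class Mission:
    title: str
    prompts: List[Prompt]
    snippets: List[Snippet] = field(default_factory=list)

    def submit(self, prompt: Prompt, answer: str,
               photo_ref: Optional[str] = None) -> Snippet:
        # Participants report at a time of their own choosing -- the
        # self-paced quality that distinguishes a diary study from
        # researcher-driven experience sampling.
        snippet = Snippet(prompt, answer, photo_ref)
        self.snippets.append(snippet)
        return snippet
```

The analysis side then just iterates over `mission.snippets`, which already carry their own timestamps and photo references.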
But on the other hand, I can do it anywhere. So we did a little trying out of this as well in my workshop yesterday. And I've also been running a study back home about urban signage and how people use wayfinding in the course of their commutes.
So there's quite a bit of richness to what comes out of this. This was one little snippet, as they call it, from somebody. So what I asked my participants yesterday to do, from the beginning of the day, was to sign up for this study. And then to, every time they went and got a snack or lunch, to take a picture and talk to me about it, tell me a couple of basic things.
Were they hungry? What did they have? And did they have any trouble getting it? I didn't think there would be a lot of trouble getting food at this particular conference, which is known for its great food, or in this particular city. But I'm not sure how easy it is to read on the big screen. Somebody actually ended up sitting upstairs, and having a hard time flagging down a waitress, and almost being late getting back to the workshop after lunch.
Well, that was kind of an interesting piece of information, about just how long lunch took and what they were doing. This was all kind of low threat. They weren't telling me about tracking something medical and how they found information about different things in the course of getting treatment, which is something you could use a method like this for.
This was a little bit of a sad one. This person decided that they were going to go mobile-free for their lunch. And they just wandered around, and they looked for someplace good to eat. And they didn't end up finding one. So they went into Chili's, which they were familiar with, and they ordered something where they knew that they liked the sauce. But they were a little bit bummed about it.
What I thought was interesting about this one, and this is one option that you can do within a diary study-type of paradigm on a mobile phone or off, someone was really thrilled to find a diet soda that wasn't caffeinated. OK. So that's an interesting piece of information about people's preferences.
But somebody else commented, and said, hey, me too. This is really cool. So these people were interacting without necessarily knowing who the other one was, just based on being participants in this study and talking about what they prefer in terms of sodas and drinks.
In my transit study, I'm getting pictures of all kinds of machinery when I actually asked for signs. I thought, wow. That's really interesting. These are the points that people find their way by, and I didn't know that. I didn't really know that at all.
But this is a really complicated piece of signage here, in terms of it's got a reader on the top. It's got three little sticker signs running down the side. And I believe that it's about waist high. So how you would actually read the three little stickers that run down the side of it is kind of an interesting question.
The person who was sending in this picture is familiar with the system and knows that they need to just tap a card on the top of that. But looking at this as something that someone would interact with as a visitor to a city or as a new person who moved to a city, it's a really challenging piece of interface.
I've got another one in a different city. Now, this person reported a lot. And the interesting pieces-- they said, this is the first of the month, so I have to tag in on this particular machine on the first of the month. And I know this.
Normally, if I was in a train station on the first of the month, there would be announcements over the loudspeaker, sort of an extra affordance to remind me to tag. But I'm not hearing them today. That's weird.
So that gave me a layer on the way that her city encourages people to do the right thing and tag their transit cards, beyond just this-- let's face it-- tiny, crappy little screen, where if I don't know what I'm supposed to do, it's really pretty challenging to figure it out. And she missed the presence of that extra help.
I wouldn't have known that otherwise, unless I'd been with her, and unless I traveled to these two different cities which are a couple thousand miles apart. I wouldn't have realized how payment mechanisms for transit are sort of de facto signage in some places, and how the instructions for them are really not workable. And the cities know this to a certain degree and are supplementing them with announcements over the loudspeaker and stuff like this.
This was just one of my favorite things that got reported to me, because someone actually went to the wrong floor of a location that they were familiar with because of signage that was confusing. And they actually were a little bit embarrassed about submitting it. But they said, yeah, I was going to my dentist. I couldn't figure out where things were with the construction signs, and I ended up on the wrong floor.
Again this is a piece of information about wayfinding that I could never have collected without spending time walking around with somebody and spending a whole day. And the reason I have it is because I was able to ask her to participate with her mobile phone, and she was able to do something that was easy enough that she could report this to me. So she could take a photo-- pretty easy. And she could fill out a web form-- also pretty easy.
And between those two affordances, I can get an incredible breadth of information about things well beyond, is this particular app working for me? Can I find the right song I want to play on a music app? Can I search video?
So there are apps now as well that are designed for storytelling, that have a lot of potential to be repurposed for research, that aren't necessarily designed as research apps. And one of the particularly interesting ones is called Blurb Mobile. Blurb, the photobook makers, interestingly, make a mobile app to string together photos and video and audio clips, and put together a narrative that you can then post for people.
This is another really interesting way to think about someone sharing an experience with you that's multi-app or that's longer than a few minutes. That's something that they couldn't do in your research lab if they came in and you just filmed them interacting with something.
Well, what about all those people who don't have a smartphone? So we talked at the beginning about how 58% of adults have a smartphone. That's a pretty high percentage. The people who don't are concentrated really heavily in the 65 and over age group, and also among people who are more economically vulnerable and in lower socioeconomic categories.
So this is something I'm kind of proud of that was actually developed at my home, Code for America, which is an app of sorts. It's not an app that you download to your phone, but we have a habit of referring to everything as an app. So it's something where you put a prompt in physical space, and a person can respond by text message.
So I think the one in the picture is a silly one. She's saying, well, what do you like about Salt Lake City? And you can answer by text message. But you can then send back and forth succeeding questions. So it's the ability to complete a survey.
It could be a company survey, right? What should we have in the vending machine? Text 22 for Coke. Text 23 for Snickers. Simple things like that can then lead into deeper questions.
And they have found as they've developed this-- it was originally a project in a yearlong fellowship that then became a startup-- that if the first question is simple, it's easier to ask deeper questions a little bit later. It's something that can be done in multiple languages, because all it relies on is text.
And so there is an ability to ask even people who are walking around with feature phones, without smartphones, to give us deeper information about what they're doing. So this takes the web form out of the equation that we were talking about. You can still send a prompt via SMS that could be answered via SMS. But you can also put the prompt right there in physical space in front of people, which is a really interesting way to start a research study. It's sort of recruiting and first question in one in a way.
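The back-and-forth SMS survey described above can be sketched as a tiny state machine keyed on the sender's phone number. To be clear, this is not the Code for America tool's actual code -- the questions, function names, and storage are all invented for illustration; you would wire `handle_inbound_sms` to whatever inbound-message webhook your SMS gateway provides.

```python
# Minimal sketch of an SMS survey flow: each inbound text advances the
# sender one step through a fixed question list. Illustrative only.

QUESTIONS = [
    "What should we have in the vending machine? Text 22 for Coke, 23 for Snickers.",
    "Thanks! Why did you pick that one?",   # simple first, deeper later
    "Anything else you'd like us to stock?",
]

sessions = {}   # phone number -> index of the next question to send
answers = []    # (phone, question_index, response) tuples

def record_answer(phone, question_index, body):
    """Store one participant response."""
    answers.append((phone, question_index, body.strip()))

def handle_inbound_sms(phone, body):
    """Return the reply to send for one incoming text message."""
    step = sessions.get(phone, 0)
    if step > 0:
        record_answer(phone, step - 1, body)   # the answer they just sent
    if step < len(QUESTIONS):
        sessions[phone] = step + 1
        return QUESTIONS[step]
    return "That's all -- thanks for participating!"
```

The first text a person sends (prompted by the sign in physical space) just triggers question one, which is what makes it recruiting and first question in one.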
So how do we design these longer-term studies to get this deeper information? And I do like to think of it as design: when you're setting up a study, you want it to be a reasonable experience, a humane experience for somebody to participate in, and at the same time to give you the kind of information that you're looking for.
For my money-- not everyone agrees with this-- recruiting is the top success factor for any study. If you don't have the right people, especially if you're looking at things like, when are people happy, or how is this long-term service design working, it doesn't work to just grab somebody from the office and ask them to run through it. So finding those people is absolutely critical.
And sometimes recruiting them via mobile is one of the best ways to do it. Mobile forms-- I've been talking about them a lot-- are a lot better than they were three years ago. A lot. This was a form that I didn't actually create as a mobile form. I just created a Wufoo form for a mini-survey for a different talk that I was giving. And lo and behold, it works and looks very decent on a mobile phone.
There is also an application called Typeform that's meant to create native mobile forms-- although I heard from a couple people yesterday that they're tricky to use on Windows phones. But they are beautiful on tablets, Android, iOS. They present a single question at a time with a great big button. They're easy to read. People are not stuck sort of scrolling through stacks of fields.
So there's really no excuse not to use these. There are a few rules. And you might think of using one of these, perhaps, if you had a prompt in physical space saying, would you like to participate in a study? That would be one way to get the recruiting in there. You could then, when someone texts you, send back a link to a form to do a screener.
When recruiting, even off of mobile actually, I like to keep it pretty short. But on a mobile form, if you're doing a recruiting screener, five questions is probably the maximum that somebody's willing to run through.
And I looked at my dscout study yesterday, which did a great job of tracking time when people were filling out those little snippets about the food. The average person took about 90 seconds to take a picture of the food item, talk about how they got to it, and fill out the questions. Five questions can be done in 90 seconds, and people have the patience to do that.
One of the other really important things, especially if you're recruiting for something long-term: you want to get people like the woman who explained how she got stuck at her dentist's office, and she was a little bit embarrassed, and she had a hard time because the signs were confusing. That was a lot of information for her to give me. So to make sure that you get people like that, it really helps to ask an open-ended question in the screener-- something that they can respond to not just with a multiple-choice answer, but where they can choose how long and how deep the answer should be. And you'll be able to tell something about them as a participant from that.
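A screener that follows these rules -- at most five questions, with at least one open-ended -- is simple enough to define as data and sanity-check. Every question below is a made-up example, not from any real study:

```python
# Illustrative five-question recruiting screener. The open-ended final
# question lets you judge how expressive a participant will be.

SCREENER = [
    {"q": "How often do you use public transit?", "type": "choice",
     "options": ["Daily", "A few times a week", "Rarely", "Never"]},
    {"q": "Do you own a smartphone?", "type": "choice",
     "options": ["Yes", "No"]},
    {"q": "Which apps do you use while commuting?", "type": "choice",
     "options": ["Maps", "Music", "Social", "None"]},
    {"q": "What's the best time of day to reach you?", "type": "choice",
     "options": ["Morning", "Afternoon", "Evening"]},
    # The open-ended one: answer length and depth tell you about the person.
    {"q": "Tell us about a time you got lost in a building or station.",
     "type": "open"},
]

def validate_screener(questions, max_questions=5):
    """Check the screener stays short and has an open-ended question."""
    assert len(questions) <= max_questions, "keep it to ~90 seconds"
    assert any(item["type"] == "open" for item in questions), \
        "include an open-ended question to gauge participants"
    return True
```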
I like to use more conversational language in a recruiting screener than I might in a survey. It's not a scientific thing. And I want them to get a sense of me as a researcher-- that it's going to be a pleasant interaction with me, it's going to be conversational, we're going to be humane.
I've had tremendous success in recent years using Twitter as a recruiting start. And over the years, we figured out some things about how to recruit with a tweet that are kind of technical that I just wanted to share. If you have one of the Twitter clients that allows you to search for hashtags or particular topics, you can find people who are talking about the thing that you're interested in and send them a direct "at" reply, saying I'm looking for people for a study, and I'm paying an incentive. Hopefully you can pay one, especially if someone is going to give you time over a longer period. It also helps get people interested in participating.
So you need room for a link. You need to say what you're doing. And you need to say if they'll be paid. And those kinds of tweets have been very, very effective.
There's also a particularly effective approach-- and I am indebted to several friends in the research community for the idea-- where you just blast out, I'm looking for these kinds of folks. There's kind of an informal agreement among researchers that people retweet each other's social media recruiting appeals. And it works shockingly well, at least to the extent of getting the kinds of people who might be on Twitter.
One of the pro tips that I discovered after a couple of failures is that it really helps if you edit your profile to actually talk about the current study that you're doing. So that if somebody gets an "at" reply from you and they're like, this is legit, they click on your profile and it says, I am doing research on this right now. Great, sounds legit. Then they can make a clear decision about whether they'd like to participate with you or not.
For recruiting people who are in those more vulnerable categories, who might not see your recruiting tweets, who might not be within three or four connections of you on a social network, there are community advocates who are often really happy to connect you with people. And some of those physical space prompts can also be particularly effective. In offering a task, we're suddenly not constrained to the screen. We're not constrained to paper prototypes or really prototypes of any kind-- although you could certainly do a longitudinal study of people interacting with an app that you have created, over the course of a few weeks or over the lifetime of the thing that the app is meant to serve.
But it could be something where someone is going through the process, let's say, of applying for a job, or finding a job, or applying for college-- something that's going to take a while, where you want to understand how much time they're giving to it and what they're using. All you need to do is structure a little bite of a task in a way that they can do it in a regular, periodic fashion.
So it helps for it to be a question that you can ask over and over again. But you could also vary the question. The task that you observe can be almost anything. Your job as a researcher is to script it in a way that fits the native motivation that they have for the task.
I've always loved this picture. I'm not sure why-- I'm not a punk girl myself, but I love her. And you can imagine a retail study where you happened to get this girl recruited because she's in the right age range. And if you've only built a prototype where the blue-sky path says you get to a pair of purple pumps at the end of this, you're going to lose all motivation with this particular young woman.
There are circumstances in which it might work. And when I talk about time-aware research, I also like to talk about her: what if you happen to catch her the weekend before her prissy sister's purple wedding? Now she has a motivation to buy a pair of purple pumps to go with the horrible bridesmaid's dress that she has to wear.
But by and large, instead of asking her to buy a particular thing, I want to ask her to look for something that she would like. And that's very much the case if you're looking for something deeper than usability. I like to use customized tasks in usability as well-- I think it's actually a lot more valuable when you have somebody doing something that they care about. But when we're talking about deeper research, beyond usability, it's absolutely critical that the person has a connection to the thing that they're doing.
So if you script the little bites of tasks, if you allow them the space to have a larger task that actually fits them, that matters to them, if you are researching the college student and how they're studying something or how they're applying, you don't need to tell them which colleges to apply to. You want to know which ones they are applying to and how they're going about it.
I believe that for this kind of testing, it's pretty critical to preflight your test. And this is one of my favorite online applications for testing tests. The idea is that as you design your test or your research, you also want to usability test what you're asking people to do. One of the quicker ways to do it is to run it through something like usertesting.com. Or this is an instance where grabbing the person down the hall can make a lot of sense.
Have some of you used usertesting.com? OK, probably a quarter to a third of you. It's an online service that provides 10-minute usability tests with a panel of people that they have recruited. So in terms of using it for deep research, you probably would not find the person who would be exactly right for the app or task that you want to research.
On the other hand, it's very quick and it's very cheap. And you can get a really good sense of whether someone can execute your task. And you get a 10-minute video of a person going through whatever it is. So I use it a lot of the time as a sort of preflight for protocols of various sorts. And it works really well for that.
And then you capture what happens. And again, the sky is somewhat the limit. This is output from that dscout app with the diary study-- a whole bunch of snippets arranged with their simple captions and notes. On the other hand, you could stack apps together to capture exactly what you want.
You could do videos. You could have people write longer things. You could have simple photos. You could have a voice study, where everything was audio clips, if you wanted to study with people maybe who have difficulty using the touch interfaces on a screen.
And in the end, of course, it's all going to come down to analysis and discovering what happened. And like any other kind of qualitative method, you're going to get a whole ton of data. You're probably going to spend a lot of time with spreadsheets and tags and stacks of photos, doing the kind of analysis that you do, hopefully to get a really rich result.
When I think about why did I decide to spend my hour with you guys talking about this in particular, it's because there are a lot of questions that have been really, really hard to research until we started to have these little mobile phones in our pockets all the time. So it would've been really hard to ask people across the country to show how transit signage affects their commute. Shopping is something that I have done before in walkarounds with apps that are meant to enhance the shopping experience within a retail environment.
But 2009- or 2010-me could only do it by being physically present, with a support team there to take care of all the network stuff, somebody to hold the computer, and somebody to take notes. Now I have the ability to do something a lot cheaper, and in some ways a little richer, because in case you haven't noticed, we have kind of an intimate relationship with our phones. I would have been really upset if the pickpocket had gotten my phone. It would have impeded my day and my week to a much greater degree than the fact that they got my wallet.
Even though they got my keys at the same time, and I had to change the locks on my house, the phone would have been worse. And I will tell a lot of things to my phone. I will do a lot of things with my phone. And I'm used to interacting with people I'm intimate with through the phone, through text interfaces on this little piece of plastic.
And so I'm used to having the kinds of interactions that are a little bit deep. It's not weird to me, the way it might be weird to sit down with you in a lab setting and talk about things that are maybe a little bit personal. I'm used to doing a lot of those things with my phone.
Of course you have to be careful. But it wouldn't be hard, probably, to study food habits a lot more deeply than we did in a silly little reference study during a technique training yesterday. That's a really important issue to people.
We could figure out how to do a diary study about food. How many messages do people get in our current grocery stores, in our current anxious times, about how their food should be healthy, or what is healthy, or what they should or shouldn't do, or what might or might not make them happy? How many mixed messages are there? This would be a really interesting way to study things like that.
What about social relationships? I haven't done this yet, but it would be fascinating to do a group diary study using people's mobile phones, if you could recruit sets of friends or families. Or, as someone suggested yesterday, what about teens and their parents? Wouldn't it be interesting to understand each other's perceptions of how they use these devices?
I know my child is not a teen yet, but she feels like I'm on my darn phone all the time. And I feel like she's on her iPad all the time. Neither one of us feels that way about our own use. It would be really interesting for someone to study family relationships as this is evolving.
What about animal care? What about a million of those things that are parts of our quotidian lives that are really important for us to understand as we develop products for people, whether they're new technology products or other kinds of products? What about exercise? What about illness? All of these things are suddenly available to us in a much simpler and more profound way because of the little phones that we're carrying around in our pockets every day.
For me, this is the one I want to do next. This is one of those public notices that's so horrible from a UX perspective-- everything is wrong here. It's Saran-wrapped to a tree. It's hard to read. And I can't even really get down to the fine print where it's asking me to do something, or telling me how I can contribute to a tiny, but perhaps significant to me, little public decision.
What if this said in giant letters, should this tree be removed, and a phone number. What if all of those public notices that you see on every building, on every liquor store, for every liquor license that might open, for every club that might open, for every house that might get remodeled or might get changed from its historic state, what if instead of this, we had a great big question in a really readable font and a phone number to start a conversation? How would that change the way that we relate to our cities? How would that change the way we relate to our neighbors? This is where I want to take research-- not on mobile phones, necessarily, but using our mobile phones as we go forward.
It isn't necessarily easy to set these up. But with a little duct tape-- and the tools are only going to get better over the next couple of years-- we can research practically anything with the little research platform that we're all carrying around in our pockets. So I hope you will all take on some of these bigger questions as you go out in the world and think about what you can find out. Thank you.
Audience Member: Hi. So I'm just wondering if you've found any way to use some of these new mobile tools to do things like remote testing, or getting insights into mobile apps that you build? Because I work for a bank. And one of our big challenges with our mobile banking platform is understanding how people actually use it in their day-to-day lives.
You had the picture-taking thing. And I was trying to imagine getting people to take a screenshot of your app when they ran into a problem, or at random moments in time. I just wondered if you had any insights into research in that kind of direction.
Cyd: Yes. So it's possible to tape together a fairly traditional mobile usability session. But all of these apps also work with a screenshot as the photo. So I think if you were thinking about your design, and if it's something that people use on a relatively regular basis, you could set up a diary study where you send someone a prompt every night that's like, did you interact with your--?
Or maybe-- let's see. I think it would be great if you actually instructed them, any time that you get an error in our app, we'd love you to take a screenshot. And then we'll email you every couple of days to collect those and ask you questions about them, or a protocol like that.
And again, you'd want to get really explicit consent upfront. You'd maybe want to even have a preparatory phone call with people if they're going to participate in a study like that. Say, we'd like you to participate for a month. You're a bank, so you could probably afford to offer a real incentive, a couple hundred bucks or something. And we're going to have a 15-minute phone call with you to talk about how important certain pieces of this are and make sure that you're fully comfortable with what we're going to be asking you to do.
And then you can send them a prompt on a regular basis saying, did you encounter any issues? Someone asked me yesterday, and we talked about it, how would people feel if we sent them a prompt knowing that they had had an error in our app? I thought that probably wouldn't be all that comfortable at this point, but I haven't tried it.
I haven't had the opportunity to be able to say, hey, we saw that you had a problem checking your balance, or we saw you had an overdraft. That starts to maybe cross a line. Whereas you could just send the same thing every day and say, hey, did you have an error?
Tell us about what was going on. How did you feel? Did it actually prevent you from doing anything? What did you do about it? If you get three or four questions in with each screenshot over the course of a month, you'll probably have a lot of data that would help you with that.
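The protocol sketched in this answer -- a recurring daily check-in plus a fixed set of follow-ups whenever someone reports a problem -- is simple enough to express directly. The prompt wording and cadence here are illustrative, not any real product's protocol:

```python
# Sketch of a diary-study prompt schedule: one daily check-in, with
# follow-up questions sent only when the participant reports an issue.

from datetime import date, timedelta

CHECK_IN = "Did you run into any issues with the app today?"
FOLLOW_UPS = [
    "Tell us what was going on (a screenshot helps!).",
    "How did you feel when it happened?",
    "Did it prevent you from doing anything? What did you do instead?",
]

def prompt_schedule(start, days):
    """Yield (date, message) pairs for a daily diary-study check-in."""
    for offset in range(days):
        yield start + timedelta(days=offset), CHECK_IN

def follow_up_messages(had_issue):
    """Return the follow-up questions to send after a participant replies."""
    return FOLLOW_UPS if had_issue else ["Great -- thanks for checking in!"]
```

For a month-long study you'd generate 30 check-ins up front and hand them to whatever messaging channel the participants consented to.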
Audience Member: So you mentioned that people needed to have a connection with the task you're asking them to do.
Audience Member: So I love the example up there with the paper prototype pasted over the phone screen. And one of the methods we were going to do-- so we work on mobile apps where people find apartments. So not everyone's really looking for an apartment. And so how do you-- if you want to go to Starbucks, and you want to say hey, try this out. How do you say, get excited about looking for an apartment? Does that make sense?
Cyd: It totally makes sense.
Audience Member: How do you make that connection?
Cyd: So it's going to be a low hit rate going into Starbucks and seeing if you can find anybody looking for an apartment-- unless you're in San Francisco. Then just try a fancier coffee shop, and you'll be fine. But I think what you would want to do is post something somewhere saying, I'm looking for apartment seekers. And perhaps you could make that post happen in, say, the apartments-for-rent section of Craigslist.
Normally, I think Craigslist is crummy for recruiting. But it might be interesting in your particular case. I would look for people who are looking for an apartment, and then ask them to meet you somewhere, or ask them to spend some time on the phone with you, rather than going into Starbucks and taking a crapshoot, or asking people to imagine they were looking for an apartment.
And a decently close second to people who are involved in the task in real time is people who were involved in the task recently. So if somebody was looking for an apartment last month and they just got one, they're going to remember those emotional states pretty well. But since it's a task that's always going on and isn't seasonal, I'd try to get somebody who is looking for it right now.
Audience Member: So we are a pretty big commerce, e-commerce mixed company. And one of the problems that we constantly have is we want to run all these tests. And the business comes back to us and says, you guys better not get in the way of people buying stuff. And so we obviously want to do these studies, and find out how people shop, and how they move multi-channel, omni-channel.
But the business is always concerned that we're going to get in the way of people actually buying stuff. How do you address those concerns with folks?
Cyd: Right. There's a couple of ways to address that. There is the "really, guys, it's not that many people." If you're going to do a qualitative study, you're going to talk to 10 or 12 people, maybe. Is that going to hit the quarterly numbers? Most likely not, unless it's a smaller company.
On the other hand, the fact that you are addressing issues that might be preventing people from purchasing in a longer-term sense, or structural issues that might be preventing people from purchasing things they actually want, would hopefully be something that you can sell to them. I've known a lot of sales departments with heavy metrics like that to be really uncomfortable with intercept recruiting.
What if somebody's coming in motivated to buy, and you stop them from it? But I don't think it's always the case that anything that gets in the way of someone buying right now will stop them from buying. If that's the case, it's a tenuous relationship.
So if you're able to create a lightweight research protocol that doesn't necessarily pull the person out of their flow, but just says, hey, can I watch your flow? I don't know what you're selling, but if it's something where the purchase decision is bigger than a matter of a few minutes-- cameras or computers or something like that, where there really is a two-phase shopping process where you're assembling a consideration set and then, after that, making your decision-- getting people in the early stages might be an interesting time to do research, because they're not far enough down the funnel to freak marketing out just yet. That's probably my best-- yeah.
Audience Member: I was just wondering if there are any tools that will trigger the prompts using GPS, so it's contextual. Like, say they walk into a store, and then: hey, buy some pink pumps. Are there apps, retail-wise, that will push coupons to you and things?
Cyd: I don't know of any that will do that directly that are intended for research, although I bet you could hack it together if that was something you really wanted to do.
Audience Member: I'm curious about best practices for taking research data and turning it into an actionable format for design or development staff to actually do something about.
Cyd: Right. So the most actionable format is prototypes or designs. And it depends on your culture, whether it's OK to go straight there from research data. Depending on how much you collect, and how qualitative it is, and how much your stakeholders have been directly involved in the research process, you need to do different levels of formality.
So for me, I often find that one of the most effective things is if my stakeholders are directly involved in the research process. That means they see me doing the recruiting, they are part of writing the protocol, they understand how the protocol is going to flex to different people's tasks. If there's any direct interaction with participants, they can see it and participate in real time. One of the most powerful things for getting everybody moving forward together is if there's a back room where they can send over questions to whoever's moderating the research.
In a diary study context, maybe they can put in questions on the things that people submit: what did they mean about those signs being confusing? Why were they confusing? If that comes from your SVP and they actually get an answer, they're suddenly much, much more invested and bought into the outcome of what you do, and more likely to take it and implement it directly.
So I think you start there with involving them as much as possible in all stages of the design of the research and then the execution of the research. And then in terms of technically how you do analysis, there's a lot of choices depending on what kind of data you have. But I usually go with some form of tagging, depending on whether it's text data or photo data.
One of the things I've found when presenting: if I have video clips or photos or quotes, anything that directly shows a human having an experience is really, really powerful. Anything that shows eight humans having the same experience-- really, really, really powerful. So take those things and move them forward into a design solution.
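The tagging pass mentioned in this answer can start as simply as counting researcher-assigned tags across snippets, and counting how many distinct participants each tag touches -- the "eight humans having the same experience" signal. The snippets, tags, and participant IDs below are invented for illustration:

```python
# Toy sketch of tag-based analysis of qualitative diary-study data.

from collections import Counter

snippets = [
    {"participant": "P1", "note": "Got lost at the dentist's office",
     "tags": ["wayfinding", "signage"]},
    {"participant": "P2", "note": "Construction signs were confusing",
     "tags": ["signage"]},
    {"participant": "P3", "note": "Couldn't find the transit entrance",
     "tags": ["wayfinding"]},
]

def tag_counts(items):
    """Count how many snippets carry each tag."""
    return Counter(tag for item in items for tag in item["tags"])

def participants_per_tag(items):
    """Count distinct participants per tag -- how widespread a pattern is."""
    seen = {}
    for item in items:
        for tag in item["tags"]:
            seen.setdefault(tag, set()).add(item["participant"])
    return {tag: len(people) for tag, people in seen.items()}
```

Distinguishing snippet counts from participant counts matters: one chatty participant can inflate a tag, but a tag spread across many participants is a pattern.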
Audience Member: Hi. I'm sure this varies depending on the project and the scenario and the quality of the people that you're testing with, but is there a safe minimum number of people to test with to get really accurate results? We're often on really quick timelines.
Cyd: Yeah. It's a great question, a classic question-- I believe one that every marketing department asks every time. Really, did you talk to enough people? But also, how can you take so much time? So classically, there's a very famous Jakob Nielsen article from the early '90s, where he says five users are enough to surface about 85% of usability issues.
I find five users to be a little scary for recruiting, though it will do if that's the amount of time you have. Because what if somebody is an outlier? What if somebody is just not a good research participant? Most of the time for usability-type studies, I like to recruit eight to 10 for basic stuff.
For something like a diary study, I'd probably try to get 10 to 15. The reason that's OK is there's a big difference between researching opinion and researching behavior. For opinion research-- and, by the way, for optimization techniques like A/B testing, which is really appropriate to a completely different phase of the design process-- you need a lot of volume for it to work. A couple of outliers can really skew a political poll or something like that. So you need to be in the thousands.
But with human behavior patterns and interacting with an interface, it repeats over much smaller samples. And this is also, though, where recruiting people who actually own and care about the task comes in. So people get into a kind of-- I call it puzzle-solving mode or research-participant mode if they don't care.
Like if I bring you in and I say OK, time to shop for forks. You're probably not shopping for forks right now, so you're going to be like OK, I want to do this right. I want to do a good job. I want to get done. And your behavior is going to be very different from someone who's trying to register for their wedding.
If you get 10 or 12 people who are going through wedding planning-- wouldn't that be a fun diary study? Lots of emotion there. If you get 10 or 12 people who are going through wedding planning, and you start to see behavioral patterns across that group, you can feel pretty confident that those behavioral patterns are real.
It's interesting. When I was a consultant, we used to sometimes get requests from clients to do a lot more participants than we would have recommended. And if they would pay for it, we'd say, OK. Yeah, sure, we'll do 30 interviews on your e-commerce checkout. And always, by about interview number 14, they were like, all right, we get it.
And they might have us finish it up in order to satisfy the people at their company who wanted the numbers. But really, by that point, things are set. You're seeing the patterns. There's a fair amount of literature on this, if you need to send it to people at your company. So you can find me or talk to me later.
Audience Member: Real quick follow-up. The eight to 10 participants that you gave-- is that the sweet spot for mobile usability studies? Or were you thinking in a broader context?
Cyd: I would say usability studies in general. I haven't seen any evidence that mobile studies are different in terms of the number of participants that you need. The confounding factor is how much you care about different OSes and different handsets. So there is some evidence that the iOS population and the Android population are somewhat different. There's also evidence that the big, fancy, shiny Android population and the small, cheap Android population are different.
So think about who your target audience is, what kinds of handsets and operating systems they tend to use, and whether that needs to be another layer in how you segment your recruiting set. What I do in those circumstances-- say this is a test for which I'd like a sample of eight to 10, but I have two segments; I really want to look at high-end Androids and iOS-- I wouldn't create eight to 10 of each, necessarily, so much as I would drop it down a little bit and say, OK, five to six of each.
So it makes my study a little bigger. But we're going to assume that there are some common behaviors between people who are wayfinding on transit across those two OSes; I just want to check whether the affordances of each phone make a difference in what people do. So make it half again as big, but with two segments.
Audience Member: Hi. I'm always curious about the data visualization, because you're not going to work in a vacuum. You're going to share the data you have with stakeholders or designers. Do you go into a survey with the end in mind? Like, I'm going to present the data this way because this is the way I'm collecting it, whether that's qualitative or quantitative.
Audience Member: I always find data visualization, in just about anything that we do in UX, can be difficult-- taking that data and making something that's digestible, that you can assimilate. Do you have a way of presenting data that always seems to resonate with people? If you have qualitative, if you have quantitative, is it like a stat? You mentioned something human that's up there, like a quote? I know it's a lot of questions.
Cyd: It really, really depends on the audience. My favorite way, because this is who I am, is that human piece. If I can present somebody with a video of a person having an experience, I feel like they'll connect and empathize with that. But at a lot of places, it's really important to also have the numbers.
There are a couple of apps that do this specifically. Like we talked yesterday in the workshop about Usabilla, which now works for mobile: an app that basically shows people screenshots and allows them to answer questions by tapping on them. And what it ends up creating, if you have 100 or 200 people run through it, is a heat map. So it's a beautiful visualization of where people thought your navigation was. It's a great thing to present to people who want to see something visual.
I will do graphs and data visualizations, but I usually try to throw a little human tweak on them with some kind of illustration or something. That is probably just a personal bias, classic researcher type.
Audience Member: Hi. The dscout app looks like a really great way to do longitudinal studies. But I worry a little bit about the attrition rate, where people might drop out of the study. So I was wondering if you could describe how you onboard people and explain how to use the app, and whether that has to be in person, just to make sure they're in for the long run.
Cyd: Yeah. So I don't think it has to be in person. For that transit study, because of what I'm interested in, I've mainly been going with pretty sophisticated users that I've recruited through my network on Twitter. That has required a personal onboarding. I'm also just asking people to do two or three days, so it's not a huge deal.
I think if I were asking people to do a month-long diary study, I wouldn't necessarily visit them in person, but I would arrange to have a call. And I would also arrange an incentive when possible. It's honestly rarely possible for us in civic tech; it's inappropriate for most governments to pay a citizen to do a research study or anything like that. But when possible, an incentive has a good effect on completion rates, especially if it's tiered: if you get to two weeks, I'll pay you this much; if you get to the end of the month, I'll pay you this much.
Yeah, I think I'd probably create a web page for the study as I was recruiting people, just a little one-page site, explaining it in detail. I'd probably test that one-page site with people who weren't part of my team, to see if they could understand what I was looking for.
I'd probably arrange for calls. And I think that in many cases a 15-minute call would do the trick: they get a sense of your voice, they meet you as a human, and I'd give them my contact information, a good way to get in touch with me during the study, so that it's not just off into the wild blue yonder.
I'd make sure that I wrapped it in some extra support there and also offered an incentive. So I would expect to spend a couple of weeks recruiting and a week onboarding people before I got going with the study of that length.
Audience Member: Hi. I was wondering from the Track Your Happiness study, if you are happier on Tuesdays and Wednesdays now that you know, now that you've seen the results from it.
Cyd: That's a really good meta question. I haven't had time to react quite yet. But I was thinking, well, now I know that there are some actions I could take on Tuesdays and Wednesdays. Maybe I need a little space between those meetings. Maybe I need to make sure I don't stay at my desk for lunch.
Audience Member: And my follow up question to that is, do you share results from your studies with your participants? And does it affect the way you approach your next study?
Cyd: I rarely have. Although because we're somewhat committed to radical transparency at Code for America, that's probably coming into my future in a very big way. We do ask them about the experience of doing the study sometimes. Were you comfortable? Was this pleasant enough?
So yeah, I'm going to be doing some really big studies coming up this year about municipal websites and people who are on the so-called wrong side of the digital divide, and whether they're able to get what they need out of city digital services. And I expect we will be sharing everything publicly. So now that you ask me, I think it would be a good ethical requirement to make sure that it actually gets back to the people that participated, and they understand what their contribution went into. Good call.
Audience Member: On the topic of compensation, are there any standard guidelines? I'm sure it varies based on the length of the study, but as far as monetary value, what might be standard or expected?
Cyd: Oh, for incentives?
Audience Member: Yeah.
Cyd: I don't know that there's a standard. I know at the commercial consulting company, a $75 Amazon gift certificate was our most common incentive for a 45-minute traditional interview. And then that would vary depending on what the company could afford. Sometimes a nonprofit might only want to pay $20, because they're a non-profit and they're hoping that people will help out of good motivation as well as for the money.
Sometimes with a really hard-to-recruit category of person, you might end up doing something quite different. I remember recruiting physicians and having to go with a $400 charitable donation, because really, they didn't need money; no amount of money paid to them was an incentive. But if we could put them in a good light by making a charitable donation on their behalf, that was a better incentive. And sometimes a $10 Starbucks card is all it takes for a quick 20-minute interview, too.
Audience Member: Very practical question here. Are you aware of any apps, like Snagit, for testing responsive or mobile sites that capture a scrolling window? I'm having trouble locating one for iPhone.
Cyd: Oh, did you say capturing the full--
Audience Member: Scrolling window for longer responsive designs.
Cyd: Yes, there is. And now I'm going to forget the name of it. It's called Barry, just Barry for iOS. It's $1.99, I think, and it works really well. And Screenshot Ultimate for you Android folks will do a full-length screenshot on Android.
Audience Member: I have sort of an interesting one. I work in finance, with advisers. So we have a lot of regulations around how often we can touch base with them and how we can touch base with them. And working on a financial adviser app, it gets very difficult, because it's really hard to test.
Because in addition to all these regulations, we have a fairly small number of people we can even talk to. So does anything pop into your head of an idea that we might be able to do that would get to those people, without having to cross those governmental regulations all the time?
And I'm just asking because this is what you do. Maybe something will pop into your head. Because we've been racking our brains about it. It's really tough.
Cyd: I'm interested in it. Without knowing what the specific regulations are, if the regulations are around phone contact or something, could you use one of these diary study methods, where it's indirect contact for a span? Are there topics that are less sensitive? But it seems it's going to be hard to develop that app well without sitting down with them and doing real usability testing at some point.
If they are a fairly homogeneous group, you could go with really small sample sizes, just so that you don't run through your group of people. So maybe you have a test with four people, and then you have a test with four different people at a different stage of prototyping. I bet you've already thought of those though.
Audience Member: Similar kind of things, yeah.
On the retail side of our company, they have the ability to put little survey questions in the nav, so you can go to a page and give some feedback on comps and things like that. But we don't think we're allowed to do that. So it's a pretty daunting situation. But thanks for any ideas.
Cyd: Yeah. Well, I sympathize.
Audience Member: Hi. You've demonstrated a lot of tools that are good for when your users are tech savvy. But we are deploying tablets and apps to a workforce that is not in the least bit tech savvy. They've probably never owned a tablet. And any phone they've ever owned, they're probably in that percentage of individuals that may text, and that might be the extent of it.
What sort of usability tools could our business analysts use to figure out how to build an app that would work for someone who's just not tech savvy?
Cyd: So I'd probably use what they're comfortable with. To the extent that you can push it, I'd use SMS, if that's an interface that they're comfortable with. I might take the time to teach those individuals to take photos with whatever the device is and teach them to send photos by SMS, so that you can construct surveys for them that include a photo or a screenshot.
At the same time, that might be a case where you need to do some sitting down with them and observing, because you're probably going to see things that, to you as more of a power user, are unexpected. And at least the first few times, you might want to see those in person.
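For the SMS-based surveys Cyd describes, the prompt timing itself is simple to automate. This is a minimal sketch, not anything from the talk: a hypothetical helper that picks random prompt times within a daily window, so pings don't land at predictable moments; the actual sending (Twilio or any SMS gateway) is assumed to happen elsewhere.

```python
import random
from datetime import time

def schedule_prompts(start_hour, end_hour, n, seed=None):
    # Pick n distinct, minute-resolution prompt times inside a daily
    # window (e.g. 9am-5pm), returned in chronological order.
    # Sending the SMS itself is out of scope for this sketch.
    rng = random.Random(seed)
    minutes = rng.sample(range(start_hour * 60, end_hour * 60), n)
    return [time(m // 60, m % 60) for m in sorted(minutes)]

# Example: five pings between 9am and 5pm for one participant-day
pings = schedule_prompts(9, 17, 5, seed=42)
```

Randomizing within a fixed window is a common experience-sampling convention: participants know roughly when to expect prompts, but can't anticipate the exact moments.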
Audience Member: OK. That's where we were going, more of a "just watch them do their jobs" sort of thing. But I didn't know if you had any other ideas for that.
Cyd: Yeah, tablets have seemed to be more congenial to people who haven't, for example, adapted to using computers with a mouse. But it's a lot of complexity when you talk about, say, what I asked my workshop participants to do yesterday: go out there and use three different apps to send back a message. That's probably a lot to ask of a population like that.
So to the extent that you can, you can do something hybrid, too: provide them with a paper trigger that includes graphic instructions on how to send you back the kind of message you're looking for. You could copy a traditional diary study, in terms of sending out something on paper that's very clear and in a form they're used to, assuming they have good literacy skills on paper, and then let it include the detailed instructions for how to do something with the device. That might be a good design.
Audience Member: OK. Thank you.
Cyd: Just off the top of my head.
Audience Member: Hi. You mentioned that recruiting is the number one success factor in a study, as well as what takes the most time. I was wondering if you have any methods that help save time on recruiting. I've heard about reusing the same users through a panel or something like that, if you have a target set of people who use your product often.
But there could be pros and cons of that. I was just wondering what methodologies you used in recruiting that would make it faster.
Cyd: Right. Yeah, I am not the biggest fan of panels, because I find that it's really important to recruit for a task match, and ideally a task match in relatively close time synchronization to when you're doing the test. Panels are not great for that, as a rule; they tend to match more on things that matter in marketing segmentation, like demographics.
So I think I've just had to get used to recruiting taking time. For studies on desktop websites, there's a great tool, which I think Adam mentioned I'm an adviser to, that lets you intercept people from a high-traffic website and get into a session with them immediately, so you're really not spending any time on recruiting. Really cool for that particular case.
But for diary studies and things like that, you are kind of in the world of having to figure out where the people you care about, presumably your customers or prospective customers, are hanging out, where you can find them, which is probably a good thing to know in general. So it's difficult that it takes time. If there's a pool of real customers you can go out to, if you have a list, that's great.
But you really want not just people who match on that kind of segmentation, but people who are in a task match as much as possible. So my real answer is that while it's painful to give recruiting that time when you're dealing with fast development cycles and so forth, I've never found it not to be worth it in terms of the quality of the study. Sometimes you have to sacrifice it, just because it turns out that time is more important than the quality of the study. But it's not wasted time.
Do we have one more? We're almost out of time. All right. Thank you very much.