The SpoolCast with Jared Spool

The SpoolCast has been bringing UX learning to designers’ ears around the world since 2005. Dozens and dozens of hours of Jared Spool interviewing some of the greatest minds in design are available for you to explore.

Episode #210 Jeff Gothelf - Lean UX: Escaping Product Requirement Hell

June 7, 2013  ·  33 minutes

Assumptions tend to be the downfall of many research projects. Making design decisions based on generalizations of what people are likely to do leads to surprises once you finally get your product in front of actual users. The result? Rework and frustration due to an overall lack of communication within the team.

Show Notes

Jeff Gothelf suggests starting with the attitude that you're testing a hypothesis, which leads to a more open discussion. The main thing is that hypotheses, just like designs, can change. Being flexible and iterative in your design process encourages an environment of collaboration.

Working this way allows you to work free of heavy design documentation as well as collaboratively in real time. Jeff finds that simply changing your thinking about your product from “we know” to “we believe” is the catalyst to a more productive workflow.

Full Transcript

Jared Spool: Hello, everyone, welcome to another episode of the SpoolCast. I have with me today the wonderful Jeff Gothelf, who is, once again, gracing our presence with a full-day workshop at the User Interface 18 Conference. This time, he's doing a workshop on escaping product requirement hell using Lean UX.

Jeff is the author of a fabulous book, cleverly titled, "Lean UX," which he co-wrote with Josh Seiden. It's a great book, and today, guess what we're going to talk about? We're going to talk about Lean UX stuff.

Hey, Jeff.
Jeff Gothelf: Hey, Jared, how are you?
Jared: Doing fine. I thought that we could talk about some things. When you and I were talking about your workshop, we got onto the subject of sort of testing out hypotheses. I'm really interested in this idea of testing hypotheses, because I grew up in this world where, when we tested a design, it was to see how usable it was. It was to see how easy it was to use, to find places where it wasn't easy to use, to iron those kinks out of things.

But as I've been reading through the Lean UX stuff and the Lean startup stuff, there's a lot of this discussion about taking a hypothesis and testing it. I'm really curious how that's different from what we've traditionally done in terms of putting users in front of a design and seeing if they can use it.
Jeff: Well, the first thing that testing hypotheses does is it simply changes the conversation, the moment you start to use the syntax of "this is a hypothesis."

The syntax that we use in the book and the syntax that we teach in workshops is, instead of "we know," it's "we believe." As soon as you change the conversation to "well, we believe this to be the case," the framing shifts. We have an educated guess, we have historical data that perhaps points to this. But until you actually put something in front of customers and have them interact with it to some extent, you don't really know.

The conversation becomes a much more collaborative and, I think, productive conversation that says, OK, we've got this idea, and the idea is that we should build this feature, or that we should change our product strategy, or that we should make the button red, or whatever that idea is; there are obviously different levels of granularity. But we believe that this is the way to go.

The next thing you need to do is assess the risk of that assumption. You've just made an assumption. The assumption is that this feature or this product will cause some change in your customer's behavior. You've declared those assumptions. Depending on the risk of those assumptions, we would recommend, if there's any kind of significant risk there, that you test those assumptions.

If the risk for you to run an A/B test, changing the button color from red to blue, to see which one gets clicked more, is relatively low, then that's fine. You can run that test. You can make that decision and see what kind of change happens. But if there's a fundamental risk here, whether it's to your existing user experience, your product, or whether it's a costly feature to implement, there needs to be an opportunity given to the team to prove out whether it's worth investing that time and money in that feature.
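To make the button example concrete, here is a minimal sketch of how such an A/B test might be read out. It is not from the episode; the variant names and traffic numbers are invented. It compares click-through rates for the two button colors with a simple two-proportion z-test.

```python
import math

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Compare click-through rates of two button variants with a z-test."""
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)   # pooled click rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (rate_b - rate_a) / se                              # positive z favors variant B
    return rate_a, rate_b, z

# Hypothetical traffic: "red" is the current button, "blue" is the candidate.
rate_red, rate_blue, z = two_proportion_z(clicks_a=120, views_a=4000,
                                          clicks_b=155, views_b=4000)
print(f"red: {rate_red:.1%}  blue: {rate_blue:.1%}  z = {z:.2f}")
# |z| above roughly 1.96 is the usual bar for significance at the 95% level.
```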

That's where this idea of a hypothesis comes in. You've written this hypothesis based on your assumptions. You've made an assumption that a particular customer type exists and you've made an assumption that that customer type has a particular pain point or a series of pain points.

You've made an assumption that your product offering can solve those pain points. You combine all those assumptions into a hypothesis that says, "We believe that building this feature for these people will achieve some kind of outcome."
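As a rough illustration of how those assumptions roll up into a single testable statement, here is a small sketch; the persona, pain point, feature, and outcome below are made-up placeholders, not anything from the episode or the Lean UX book.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    persona: str      # who we believe the customer is
    pain_point: str   # the need we believe they have
    feature: str      # what we believe will address that need
    outcome: str      # the measurable change that would prove us right

    def statement(self) -> str:
        return (f"We believe that building {self.feature} for {self.persona} "
                f"will relieve '{self.pain_point}', and we'll know we're right "
                f"when we see {self.outcome}.")

# Hypothetical example, echoing the pricing discussion later in the episode.
h = Hypothesis(persona="existing subscribers",
               pain_point="running out of storage",
               feature="an unlimited-storage tier",
               outcome="a measurable lift in $49/year upgrades")
print(h.statement())
```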

Now your job is to test that hypothesis as quickly as possible and in as lightweight a fashion as possible. The difference here is that you're not testing usability. You're testing value. You're testing the viability of an idea or the value of a proposed feature, to see if it's worth investing in. You'll get to usability testing, should you choose to invest further in this feature. But do you even want to design something that people don't want or need, or spend any time building it?
Jared: I'm thinking about a website where they've decided to undergo a major change in their pricing model. They used to be $25 a year and now they're going to $49 a year. They've got different features that they're offering for that.

Someone in the business department thinks that this is going to reach a wider audience, make people happier, give more value. Walk me through. The hypothesis is that this is an improved model. Is that right? Or do we have to refine it a little further?
Jeff: This is a business assumption. There's no product assumption to test there.
Jared: There's feature packaging that's going to go with this. The idea is... Let's say the $25-a-year thing was a limited package, but now we're going to unlimited storage and ad-free content and being able to share with your friends in a new way. This is what the business dude thought up. Do we break that into several different hypotheses? Is that what we do?
Jeff: You can. Again, it really depends on the amount of risk. If all these things already exist and you offer them in one capacity or another, and it's simply a pricing test, then running a pricing test is a relatively simple process. The hypothesis is simple: will people pay $49 for this thing that we used to charge $25 for? You set up a landing page that offers the new pricing scheme. You drive traffic to it with some Google AdWords or however you're driving traffic for your target audience.

And you just see if people will start paying that kind of money for it. That is a hypothesis, but it's a relatively easy hypothesis to test. It gets more interesting when the business dude says, "We're going to charge $49 for a more complete package, but we'd like to include these two new features in that package. We'd like to upgrade the functionality of the product in this way for that package." Now the question becomes, is there a significant investment in making the product do these things?

And is there a need in the marketplace for these additional things for the product? The question then becomes not only, "Will people pay for it?" But, "Is it worth our while to even build this? Will people use it? Is this something that will make them more satisfied customers? Other than just making us more money, will this make them more satisfied customers?" That's the question that we need to ask.
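One way to read out a landing-page pricing test like the one Jeff describes is revenue per visitor: fewer sign-ups at $49 can still beat more sign-ups at $25. The sketch below uses invented traffic numbers purely as an illustration; nothing in it comes from the episode.

```python
def revenue_per_visitor(visitors, signups, price):
    """Landing-page readout: conversion rate and revenue per visitor at one price."""
    conversion = signups / visitors
    return conversion, conversion * price

# Hypothetical landing-page traffic at the two price points.
conv_25, rpv_25 = revenue_per_visitor(visitors=2000, signups=90, price=25)
conv_49, rpv_49 = revenue_per_visitor(visitors=2000, signups=55, price=49)
print(f"$25 plan: {conv_25:.1%} convert, ${rpv_25:.2f} per visitor")
print(f"$49 plan: {conv_49:.1%} convert, ${rpv_49:.2f} per visitor")
# The higher price can win on revenue even with fewer sign-ups, which is exactly
# the trade-off the hypothesis is meant to surface before anything gets built.
```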

If one of the features that was suggested is a high-risk feature, then the hypothesis becomes, "We believe that building this high-risk feature for this portion of our target audience will alleviate this particular need for them and, in turn, will drive greater subscriptions of the premium product." So we've got a clear definition of success there as well.
Jared: What makes something a high-risk feature? Is it just the complexity to implement? Or are there other things too?
Jeff: There's a variety of choices. Ultimately, your company needs to decide how it quantifies risk. But yes, technical complexity, for example, is high risk. It may be simply something that takes a lot of resources to implement. It may not be technically complex, but it's just going to take a lot of time to build these features. So there's financial risk to your company. Maybe you are taking on a new target audience.

You're opening an exclusive product to a broader audience. There's a risk of either erosion of your existing customer base or a cannibalization of that base. Maybe you're stepping into a new market that you really don't know that much about. You've been working in one vertical for a while and you'd like to move over to a new vertical. These are all things that, as a team and as a company, you need to decide how you're going to prioritize your hypotheses.

Ultimately, you can't test them all at once. You don't want to test them all at once. And so you need to be able to figure out, "How are we going to prioritize a backlog of hypotheses so that we have a clear plan as to which ones we're going to test first and then second and then third?" That's really a team level or a company level decision about what are the riskiest things that you'd like to go after.
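A backlog of hypotheses like the one Jeff describes can be as simple as a list sorted by whatever risk score the team agrees on. The sketch below is only an illustration with made-up hypotheses and a naive risk-times-cost score; neither the entries nor the scoring scheme come from the episode.

```python
# Each entry: (hypothesis, risk of being wrong 1-5, cost to build 1-5).
backlog = [
    ("A new $49 premium tier will lift revenue",          4, 2),
    ("Unlimited storage will reduce churn",               3, 5),
    ("A red call-to-action button will get more clicks",  1, 1),
    ("Entering the enterprise vertical will pay off",     5, 5),
]

# Test the riskiest, most expensive bets first; cheap, low-risk ones can just ship.
for hypothesis, risk, cost in sorted(backlog, key=lambda h: h[1] * h[2], reverse=True):
    print(f"score {risk * cost:>2}: {hypothesis}")
```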
Jared: Let's say I'm a dude who's been doing user research now for a while. Standard user research stuff. I conduct usability tests and give feedback to the team on what features to change. Maybe I've done a good job of convincing folks to go out in the field with me.

We go out, meet our customers and begin to see who they are. What skills do I have to pick up now, to move into doing more of this hypothesis testing? Or is it basically just the same thing? We're just re-framing the way we look at the problem?
Jeff: The skill sets that researchers bring to teams that are working in a more iterative, rapid fashion or delivering in a more agile fashion are invaluable. The thing that we're testing and the formality of the testing changes a bit. As well as who's doing the research. So the skill sets that are brought in are the same. You're utilizing those same techniques that you've used to extract information out of customers for years.

It's just the creativity comes in now, as to what we're going to test and why we're testing it. The question is, we're not going to test pixel-level detail right off the bat. Maybe that's not where we want to spend our time. Maybe where we want to spend our time is with a paper prototype or an approximation of a work flow through a couple of screens that we hacked together in a Keynote flow.

Anything to just get the conversation started with customers. Maybe the smallest thing that you can test is not even digital at all. Maybe it's simply getting folks to read a paragraph and react to it. Maybe that's it. So it's understanding how to apply the skills you've been using for years, in a context that requires you to share those skills with your teammates. Because we want to empower everybody on your team to be able to talk to customers and get those questions asked appropriately.

And then also to help the team understand how to structure conversations with customers when they're looking at these less than polished experiences, or approximations of an experience.
Jared: Part of this means that we have to think about where we're coming from, in terms of our process. But some of it also has to do with the assumptions that we bring to the table. When I talk to teams, there are people on the teams who have all these different assumptions. "Our users can't handle this sort of thing. Our users really want this type of functionality. Our market, traditionally, hasn't responded well to things that do X."

I often wonder where a lot of these assumptions come from. We were working with a company that produces software for human resources departments. They were pretty much convinced that the human resources people knew exactly what they needed the product to do. So it was just a matter of going in and having enough options in their product to do everything a human resources department could ask for.

When we went out and actually met with the human resources folks, one of the things we quickly learned is, they aren't very good. A lot of them don't know how to do the things they need to do. They were looking for the software to give them best practice suggestions. They took the defaults in the software that they had and assumed it was the best practice, when in fact it probably wasn't. To me, it seems like there's a big problem that teams have with the assumptions that they are basing everything on.

They build assumptions on top of assumptions on top of assumptions. Have you seen that problem?
Jeff: Certainly. Every time that we engage with a new team, the first thing we do is have them declare their assumptions. The first assumptions we have them declare is who they believe they are building products for. Personas. Not a big secret. We have them talk about who they believe they're building products for. It amazes me. The same way as the anecdote that you shared... I kicked off a project with a client about a month ago.

We were there. We were on-site, kicking off with our inception workshop. One of the first exercises we do is get everybody to tell us who they think we're building products for. When I spoke with the client in advance, they said, "Yeah, we've got personas defined. We've got the catchy little names for them and the pictures. We've got the PowerPoint presentation with them. They were defined five years ago. So this section shouldn't take but 15 or 20 minutes to review the personas."

We sat down and spent over two hours bringing them to consensus, as a team, around who they believe they're building products for. We see this all the time. The most critical mistake that teams make is assuming they're on the same page with their colleagues. The work that we do, when we start building products with teams and when we're kicking things off, is just to get those ideas out of their heads.

What does the engineer think? What does the product owner think? What does the stakeholder think? What does the designer think? It's amazing how different the worldview is. For people who've been working next to each other for years, in the same domain. And then bringing them to a consensus. Again, the consensus that we bring them to, they don't have to necessarily agree 100 percent that these are the three or four personas that we absolutely are building product for.

They just have to agree these are the three or four we believe we're targeting. Then we go out and test our hypotheses. One of the things that we're testing is the personas, the validity of the personas. Does this person exist? We just made Jared the Jewel Collector. We assumed that Jared has these characteristics that drive the way he would use our service, that he has these needs and that our solution maybe has value for Jared.

One of the first things we do is go look for Jared the Jewel Collector. Does he exist? OK, he exists. Terrific. We found 10 or 15 people that embody this archetype. That's terrific. Does this person have these pain points, these needs? Yes. Yes, they do. Or no, they don't. And if they don't, what needs do they have and can we adjust our thinking to accommodate that? It's one of the first things. One of the anecdotes I tell all the time when I'm in front of teams is a story about a team we were working with in New York, who was building... It was a food app targeted at locavores, people looking for locally grown, organic food.

They were targeting women in their late twenties who like to cook, single women in their late twenties who like to cook. They made a proto-persona, one of the lightweight ones we talk about in the book, called Susan. Coincidentally, we happened to be up the street from Union Square, in New York, where there happens to be a farmers' market. We sent a team out into the farmers' market, in search of Susan.

They went out and talked to all the single women in their late twenties that they could find. They couldn't find a single one that liked to cook. All these women that they found liked to buy pre-packaged, prepared foods they could heat up or warm up very quickly, because they were just too busy to spend time cooking. What they did find were a bunch of men in their thirties who loved to cook and were always looking for these unique recipes and these unique local ingredients.

So they went back, after spending a half day creating this persona and validating whether it exists or not. They adjusted Susan. Susan became Timothy, over the course of an hour. All of a sudden, they had a slightly more realistic view of who they were targeting. They set up a series of assumptions, then went out and knocked them down. Then realized where they had to make some tweaks and what they wanted to adjust in their personas.

That helped them focus their product further. One of the first things we tell teams to do is to go out there and just make sure that these customers actually exist. Some customers and some companies are very well researched in this. They know these things. That's terrific. Let's get that information out. Let's make sure everybody else on the team knows this as well. Then let's build those into our validation processes.

So that we can recruit when we actually go out to find customers, against these targets. If they're current and well-researched they should continue to prove themselves out. If they're not, then at least you can get a more current view of who your audience is, your customer base is and how they're dealing in the vertical that you work in.
Jared: I'm interested when you say that a lot of companies are well-researched. In my experience, research comes in lots of different flavors. You've got market research, where people are focused on who they're going to sell to.

It turns out that market research and the type of research we need to build products often don't break down the same way. So the marketing segments that folks come up with, to figure out how they're going to position the product and what messaging they're going to use, don't necessarily translate well into the design decisions that we have to make.

So we need behavioral understandings of our folks. For example, Susan and Timothy, I'm curious, are there behavioral differences between what they imagine that twenty-something woman to be, versus the thirty-something guy? Maybe this is not what you're trying to do, but how do you make sure that you're testing the behavioral understanding of who the user is, in addition to this demographic psychographic approach to it?
Jeff: There were a lot of questions in there.
Jared: Sorry about that.
Jeff: I'll try to go through them, back to front. I'm not convinced that many companies are well-researched. I do run across that company that says, "We have personas. We know who our customers are." So what we ask them to do is, we ask them to bring that data into the conversation, into the inception conversations, into the hypothesis-writing sessions, into the assumption declarations.

The amazing thing is, again, they may be well-researched, they may be documented, but no one has ever thought to question them before. So all of a sudden, the persona conversation is open for debate, which it never has been before. That's an interesting one. Yes, I completely agree that who marketers want to sell to, versus who actually uses the product are often two different people. So for that reason, if there is a marketing person or a marketing representation that can be included on the project, at least in some capacity, I think it's extremely valuable.

Because one of the techniques that we promote in the book, we call it collaborative discovery. Essentially, all it is is letting the entire project team do the research together. So engineers, designers, product managers, marketers, researchers, content strategists, QA, whoever's on the team, goes out into the field, in pairs, and talks to customers. And meets them. Either on-site, in your company by bringing them into your lab, or in the context of where they work.

But the idea is to expose everyone on the team to the customer and the abrupt dose of reality that gives to people who have never been out there. I'd be surprised if most of the marketers in your organization have talked to customers on a regular basis. In many organizations, engineers never get to talk to customers. Frankly, in some organizations, designers never get to talk to customers. So the opportunity to get out there and start having those conversations starts to bring a bit of reality and ultimately humility into the persona conversation.

That's something that's been very helpful for the teams that we've been working with, to get them out into the field, talking to customers on a regular basis. And again, tying back to what I said earlier, this is really where your user researchers play a critical role. "Here's how you talk to customers. Here's how you structure an interview. Here's how you ask a question." You see what I'm saying? You apply all the things that you know so well, as a researcher and start to train the team up on those skills, so that the insight they bring back is useful and usable.

Regarding the fundamental product changes that you make, as you go out and meet these people, is there a fundamental change in the way that a woman in her late twenties would use a particular application, versus a man in his thirties? The answer is, we don't know. That's an assumption that we're making. To your point, the things that we care about in those target personas are the psychographic and demographic components that affect the way they would use our product or service.

If they have four kids and that has no bearing on my application or feature or whatever it is, I don't care about that. If they're male or female and that doesn't change the way they would use this particular service, ultimately that doesn't matter. We're looking for the psychographic and the demographic attributes that would affect the way they shop. For example, for this locavore app that we mentioned earlier, if they are into fitness and don't eat fast food, that's relevant.

If they drive a Volvo, I don't think that's relevant. We want to focus the personas on the characteristics that we believe will actually affect behavior in the application. And then go validate that these components exist in people.
Jared: Though, when I work with teams, the rule of thumb that I like to use is, every time you write down something about your persona, you need to tell me what that affects in the design, what decision that has a bearing on. I used to call this the dog and Hummer rule. Because inevitably, in the persona, it tells you what pets they own and what car they drive. Like you said, most of the time, that has no bearing on it.

But then I was working with HGTV, the folks who do Home and Garden Television. They were working on a search engine for projects, home improvement projects. They had dogs and Hummers in their persona descriptions. I thought, "Guys, cut this out." But it turned out it made a difference. Where it had a bearing was having a capability to specify that you needed a project that was pet-friendly.

So that if the dog ran through the construction of the bathroom refurb, that it wasn't going to somehow get into something it shouldn't be into that could ultimately hurt the pet. Or the car they drove made a difference if they're going to go to the lumber yard and bring home their own supplies. Suddenly, those things had bearing. But you have to really be able to push and say, "OK. Are we going to have a feature that says, 'You tell me what car you have and we'll tell you how you get the supplies home'."
Jeff: That's the thing. We always push back, as well. When they say, "They're married and they have two children," it's like, "Let's talk about the things that... If they're a professional person and we're dealing with them in a professional capacity, what aspects of their professional life affect this?" It's an interesting conversation. To your point, we try to lead with personas. Because everything else flows from that.

The measures of success flow from that, the outcomes that we push teams to work towards. And then also the feature sets, to your point. The design, what are the things we're actually going to build for this persona that will be valuable to them and help us achieve our goals, as a business?
Jared: I've been seeing a lot of this noise lately, about, "We've tried personas and they don't work. We've decided they're not useful. They're a waste of time." I'm with you. I think personas and scenarios are the way to drive everything in the project. Have you run into this attitude that personas don't work out?
Jeff: I've seen it a few times. I've had a few teams ask me, "Why don't we just start with tasks? Why don't we just start with the outcomes?" The real power, for me, is getting these teams to think about who they're building products for. Because I don't believe they've actually done it.

At the very least, even if they throw them away when I walk out of the room, they work well as an alignment tool. You can take the team and say, "If we're all just talking about feature sets, we're just shooting in every direction in the dark. Let's at least align around a particular direction and a set of people whose behavior we'd like to affect."

And try to get the team to align around that. But yes, I've seen that. I've had questions come up about, "Why don't you just start with a task? What is the task we want the user to complete?" The question I always ask back is, "Who is the user? What is their motivation? Why are they even using this service?" Again, I'm not advocating for six months and $50,000 worth of persona research that ends up in a PowerPoint deck that everybody reads once and then files away.
Jared: That's the problem. When people think personas, they think that project, the persona project. We got ourselves into trouble by saying personas were a project in and of themselves.
Jeff: Research, in general, any kind of research, should be integrated into the project. Tomer Sharon wrote "It's Our Research," and I love what he says: "If people are talking about research as a project, you're doing it wrong." The research process needs to be integrated into the product design and development process and should be an ongoing activity. It's not a one-time thing we did for six months before we launched and then for a month after we launched.

It should be just a part of the conversation, from the beginning. Personas are no different. This is saying, "Let's not spend six months waiting for research to come back on whether or not these people exist. Let's declare our assumptions based on what we know. We won't get it 100 percent right. We won't get it 100 percent wrong." Then as we go out and build these continuous cycles of feedback from the market, grounding our ideas in the realities of the market by going out there and talking to people weekly, we start to figure out a lot of things.

Not the least of which is, do these people exist and do they have the needs that we are building features for?
Jared: That makes perfect sense to me. We should tell everybody about Lean Day West.
Jeff: I would love to tell them about it. Lean Day West is coming up in September. September 15th through the 17th, this year, in Portland, Oregon. We had such a huge success with Lean Day UX, on March 1st in New York that we decided to do it again. This time on the west coast. We got a lot of requests from folks on the west coast that said, "When are you bringing this to the west coast?" The answer is Lean Day West, September 15th through the 17th, in Portland, Oregon.

This time it's actually a total of three days, although there's only one activity on the Sunday, on the 15th. There's a pre-conference workshop, which is terrific, gets you into the mood, understanding the concepts that the folks will be talking about. Then we've got a day of workshops from people like Bill Scott, who is, in my opinion, a legend and a pretty amazing designer and technologist. He runs the front-end development team at PayPal these days.

We've got Farrah Bostic talking about how to do research in a lean way. We've got Andrew Crow and Dan Harrelson, from GE's design center, talking about design systems. We've got Lionel Morry, from Intuit, talking about applying design thinking to your processes. Intuit's a real pioneer in this.

Then all those guys, plus a few more folks, are giving talks on the second day. So one day of workshops, one day of talks, and the Sunday before, if you get in early, we've got a pre-conference workshop with Jess Stearn about using improvisation as part of your product definition process.

It's really interesting. So LeanDayWest.com is the website where you can get your tickets and read all about the lineups and the activities that we've got planned. We've got tickets on sale right now, at the early registration price. That price is good until the 11th of July. After the 11th of July, prices jump up a couple of hundred bucks. If you want to get in, get in now at the early registration rates. Again, those are good until July 11th.
Jared: And you're going to be speaking at the User Interface 18 Conference. We have you doing a full day workshop called Escaping Product Requirement Hell Using Lean UX. It's going to be a lot of fun. You've made this really interactive, right?
Jeff: It's been a really terrific couple of years. I've been teaching workshops all around the world to a variety of different audiences. I've learned what makes them fun and interesting. It's a workshop, so we do work. It's not me talking at you for eight hours. It's a little bit of lecture. It's a lot of hands-on, team-based activities. We have a lot of fun. We do some things that maybe you've never done before. Maybe you have done them before.

We get to learn from each other in a highly collaborative way. I sincerely hope all of you join me there.
Jared: It's going to be great. Love to have you there. And your book, Lean UX, that you co-wrote with Josh Seiden, that's doing really well. People can get that off Amazon or wherever great books are found. I guess that's how the saying goes, right?
Jeff: And O'Reilly.com has it. LeanUX.com has links to everything as well, but it's been doing really well. A lot of really great reviews. I will ask that, if you have read the book or if you get the book, it would be really great if you could write a review on Amazon. We've got about 30 reviews up there now and we'd love to get your opinion. The way we get better is to get feedback from the people who've read the book. We'd love to hear from you.
Jared: There you go. Jeff, thanks for taking the time to talk with me today. This is really awesome.
Jeff: My pleasure, Jared. It's always fun chatting with you.
Jared: I want to thank our audience for listening and once again for encouraging our behavior. We'll talk to you again next time. Take care.