The SpoolCast with Jared Spool

The SpoolCast has been bringing UX learning to designers’ ears around the world since 2005. Dozens and dozens of hours of Jared Spool interviewing some of the greatest minds in design are available for you to explore.

Episode #238 Josh Seiden - Hypothesis-based Design within Lean UX

May 21, 2014  ·  15 minutes


In traditional development environments, requirements are what you base the project’s direction on. However, requirements assume that you already know what you’re building and why. Shifting your thinking to a hypothesis-based approach lets you examine where you may be wrong. Lean UX embraces hypotheses to quickly determine what is and isn’t true about a project and which path is the right one to go down.

Show Notes

Josh Seiden co-wrote the Lean UX book with Jeff Gothelf. In his work, Josh arrives at hypotheses by assembling everything the team knows about a project. In his virtual seminar, Lean UX: Forming and Testing Hypotheses, Josh explains that by listing out all of your assumptions you can see which will have the biggest impact if you’re incorrect. This helps shape the hypothesis and the direction for the project.

The audience asked many great questions of Josh during the live seminar. He joins Adam Churchill to cover some of those questions and more in this podcast.

  • How is a user story different from a hypothesis?
  • What is the source material of hypotheses?
  • How can you integrate this process into a closed session environment?
  • Is there research done in advance of forming the hypothesis?
  • How can teams with strong differences in their viewpoints reconcile through this approach?
  • Can you test multiple hypotheses at once?
  • Can you combine both user and business outcomes in one hypothesis?

Full Transcript

Adam Churchill: Hello, everyone! Thanks for joining us for another edition of the SpoolCast. A few weeks ago, Josh Seiden presented a wonderful Virtual Seminar titled, "Lean UX: Forming and Testing Hypotheses."

Josh's presentation, along with over 160 other pieces of useful content just like it, is now part of UIE's All You Can Learn. It's a library of all things UX. Today, he's joining us to discuss some of the great questions that came in from our audience during the webinar.

It's easy to talk about features, but features don't always translate into functional, profitable, or sustainable products. That's where Lean UX comes in. It reframes a typical design process from one driven by deliverables to one driven by data.

Josh has been there, he's done it. In the seminar, he showed folks how to change their thinking, too.

Josh, thanks for joining us to spend a bit more time talking about this topic.
Josh Seiden: Adam, great to be here, thank you.
Adam: For those that weren't listening that day, that didn't join us for the Virtual Seminar, can you give us a bit of an overview on what you talked about?
Josh: Sure. We really focused that day on a close look at the idea of the hypothesis. It's really one of the core concepts behind the whole Lean UX approach.

The lean UX approach assumes that there are things in our project that we assume to be true, but that could be wrong. The idea, when we're working that way, is to figure out as quickly as possible what's true, what we know to be true and where we're wrong, so that we can change paths, change courses appropriately.

A hypothesis is a way of being explicit about that. It's a statement of what we believe to be true and also a statement of sort of the feedback we're looking for from user research or market research, so that we'll know if we're right or wrong.
Adam: There were, as I said, lots of great questions from our audience and I think we've chosen a few to circle back on. There's a lot of talk about user stories and what some design teams call epics. How are those things different from a hypothesis?
Josh: Sure. The lean UX approach is built on top of an agile approach and in an agile software engineering project, the sort of unit of work that we manage is called a user story. It's a simple description of a piece of functionality that the system will perform.

A user story is really what, in a more traditional software project, can be thought of as a requirement. In an engineering project, we might say, "The requirement is that the nut and bolt assembly be put together and torqued to a certain value." That's a requirement. It assumes that you know what you're doing.

A hypothesis is related in the sense that a hypothesis is a way of organizing our work. We can say, "Well, in the same way that we have six user stories that we need to work on this week," we might say, "Here's a list of hypotheses that we want to work on in the course of this project. We're going to move through that list and test and see if we're right or wrong."

The big difference is that a user story or a requirement, the starting point is that we know what we're doing and we know that the thing that we're building is correct. Whereas, a hypothesis is really a question -- a backlog or a list of hypotheses is really a question list.
Adam: Where do design teams get them? What are the sources of the material that you use to come up with the testing hypothesis?
Josh: That's a great question. It's really a pretty straight line. We start looking for hypotheses by assembling what we know about the project. We like to take a 360 degree look at the project, so what do we know about our users? What do we know about what they're trying to accomplish? What do we know about the market? What do we know about our business and what is our business trying to accomplish?

We list all those things out. You sort of start by saying, "This is a list of assumptions. We believe these things to be true. But we may not know that they're true." Those assumptions are sort of the source of the hypotheses. You look at your assumptions and you say, "OK, which ones of these are risky? Which assumptions here, if we're wrong, are we going to be in big trouble?" Those are the ones that you sort of transform into hypotheses.
Adam: We had a question come in from one of our attendees, Ally. Ally explained that, in their company, the planning for features in the project work is done at the C-level and unfortunately, it sounds like it happens in a closed session environment. What are your suggestions for integrating this process into one like that?
Josh: Yeah, so I think there are a couple of issues here. The first is, if it's really a closed session, and you don't have a seat at the table, then there's almost no technique you can use that will work. Right? So the first issue is how to participate in that conversation.

I actually think hypotheses -- they're a pretty good tool to sort of help you open the door. Because this sort of basic structure of the hypothesis is built around the outcome that the business is trying to achieve. You might say with a hypothesis, we believe that if we put this feature in the product, it will help the user do this thing and that will help us achieve the following business outcome.

The hypothesis, that structure forms a kind of a traceable line between the feature and the business outcome. Having that kind of traceable line, it gives you a way to talk to project stakeholders.

You can start, if you're in a position, to have that conversation. In other words, if you can get the door open, you can have a conversation that says, "OK, I see this feature list, let's just track that back to what business outcomes we're trying to achieve, and furthermore, how we'll know that we're achieving those business outcomes, so we can start to think about metrics and start to think about how these features sort of earn their keep."
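The structure Josh describes here can be sketched as a fill-in-the-blank template. This is an editorial paraphrase of the "traceable line" he outlines, not wording from the seminar itself; the bracketed names are placeholders:

```
We believe that [building this feature]
for [these users]
will achieve [this user outcome],
which will drive [this business outcome].
We will know we are right when we see [this measurable signal].
```

Writing the statement this way makes each link in the chain visible, so a stakeholder can challenge the feature, the user outcome, the business outcome, or the metric separately.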
Adam: The folks at Rockwell were wondering if there's research that's done leading up to this step in the process, or is the hypothesis sort of put in place to test internal assumptions and sort of skip some of that?
Josh: What's really interesting, I think, is that the hypothesis is useful no matter how much research you've done. If you're an advocate of user research, and I certainly am, you know that you've never learned enough about your user. There are always open questions.

The hypothesis can be used at that moment, when you're in the middle of a project, for example, to express what you need to learn next. But the cool thing is you can also use them at the beginning of the project when all you've got are wild-ass guesses.

It can be used at every stage in the life cycle, too, because what you're doing is, you're starting by saying, we believe this thing to be true, and here's how we're going to test it to find out if we're right. You can make a statement like that when you've done very little research, when you've done no research at all, or even after you've done a lot of research.
Adam: The Frankly Chat team, they posed this situation where you may have a number of stakeholders and their views are very different. Ultimately, those varying views end up with messy and inconclusive data to make your decisions on. Their question is how do teams with strong differences at that level sort of reconcile their views through this approach?
Josh: Yeah, one of the things that I learned in the course of my career is that, when you get these kinds of polarized sort of opinion wars on a team or within a project, it generally means that you don't have enough information in the room to make a good decision. If you had more data, the right answer would appear. That's not always true, but it's a pretty good guideline.

If there's a difference of opinion -- a strong difference of opinion, even -- but the team is willing to sort of acknowledge that there's not enough data to make a confident decision, then you can use this structure as a basis for figuring out what you need to learn in order to decide.

You might frame up each argument as a hypothesis. We believe a blue button will solve this problem and we'll know we're right when we see some percentage of clicks on the blue button. Versus, we believe the red button is right, and we'll know we're right when we see X.

You can frame those up and you can run tests, and collect the data that you need. I should say that sometimes the data is, as I implied in this example, classic A/B testing, really hard numerical data. Sometimes the data you collect is messier. Sometimes you can use this hypothesis structure with qualitative research, as well. We think the user has a need to do X, and we'll know it's true when we interview a dozen users and we hear some percentage of them express that.

That data is inherently messier and open to interpretation. This isn't science, but this is a structure for pointing teams in the right direction so that they get enough data so that they can make decisions that they feel more confident with. But again, it's not science and the team has to agree that sometimes the data is going to be inconclusive and you are just going to have to make a decision.
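As an aside, the blue-versus-red-button comparison Josh describes is typically evaluated with a two-proportion z-test. The sketch below is an editorial illustration with made-up click counts, not anything from the seminar:

```python
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Z statistic comparing two click-through rates (A vs. B)."""
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    # Pooled rate under the null hypothesis that both buttons perform the same
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical results: 120/1000 clicks on blue vs. 90/1000 on red
z = two_proportion_z(120, 1000, 90, 1000)
# |z| > 1.96 rejects "no difference" at the 95% confidence level
```

With these invented numbers the statistic exceeds 1.96, so the team would conclude the blue button genuinely outperforms red; with smaller samples, the same observed rates might be inconclusive, which is exactly the "messy data" situation Josh warns about.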
Adam: Josh, what do you think about situations where a design team is working with multiple features or hypotheses on a single page? Do you test them individually or is there a way to do them all at once?
Josh: This is one of the things that I really like about the hypothesis method, is that it has this notion of outcome built into it. In other words, let's say you're testing a product page. Ultimately, you probably have some metrics for what that product page needs to do to be a success. There are some high level outcomes you want from people who see that page. You want them to add the product to the cart and then checkout.

By evaluating the outcomes, you sort of step back from evaluating the independent features. It's not about whether the individual feature justifies its existence. It's more about, if I've got a dozen features on a page, does that make the page better? Or does, just by virtue of there being a dozen features on this page, does it make it worse? What if I add two more features? Does that improve the page or make the whole page worse?

It forces the team to have a conversation about the outcomes they're trying to achieve and it moves the discussion away from the individual feature polishing that we sometimes get into, especially when we're engaged in a project for a long period of time.
Adam: Dennis asks this question about outcomes, which is something you spoke about in the Virtual Seminar. He wants to know if there's a way to combine user outcomes and business outcomes in the same hypothesis.
Josh: Absolutely, and I think that's another thing that is really appealing to me about this whole structure, is it's a way of creating this traceable line between the feature, what the user is trying to achieve, and what the business is trying to achieve.

We might say, by adding this feature to the product page, let's say by adding a review feature to the product page, the user is able to see what his or her peers think of this product and thus the user is more likely to add the product to the cart and checkout.

In that statement, you have the feature, you have the statement of what the user is trying to achieve, right, this feature is the review and the user is trying to get social proof. All of that is tied to the business outcome, which is the user's more likely to put this product in his or her cart.

For me, having all three of those pieces connected, it's really critical. Otherwise, why build a feature?
Adam: Very good. Well, Josh, thanks for offering to spend a bit more time with us.
Josh: Oh, it's a pleasure to be here.
Adam: To our audience, thanks for listening in and for your support of the UIE Virtual Seminar program. Remember that you can get at Josh's seminar and lots more over at All You Can Learn.

Goodbye for now.