The SpoolCast with Jared Spool

The SpoolCast has been bringing UX learning to designers’ ears around the world since 2005. Dozens and dozens of hours of Jared Spool interviewing some of the greatest minds in design are available for you to explore.

Episode #196 Jared Spool - Build a Winning UX Strategy from the Kano Model

January 16, 2013  ·  30 minutes


The ultimate goal for user experience is that users enjoy using your product or service. Many companies use satisfaction as a metric for measuring their success. But satisfaction is really just the lack of frustration. You should be focused on what you can do to delight your users.

Show Notes

The Kano Model helps you gauge your users’ expectations. When you approach delight from a perspective of pleasure, flow, and meaning, you can then determine which features meet these objectives.

The audience asked a bunch of great questions during the live seminar. In this podcast, Jared joins Adam Churchill to revisit some of those questions, as well as tackle ones we weren’t able to get to.

  • Is consistency in design a bad thing?
  • What research methods are there to unearth customer expectations?
  • How can you meet customer expectations on a new product?
  • Do you plot different audience segments in the same model?
  • What if the people selecting the product are not the users?

Full Transcript

Adam Churchill: Welcome, everyone, to another edition of the SpoolCast. Earlier this fall, our own Jared Spool presented a fantastic virtual seminar. It was called "Building a Winning UX Strategy from the Kano Model." This seminar, along with over 100 others that teach you the tools and techniques you need to create great design, is now part of the UIE User Experience Training Library.

In this seminar, Jared tells us about a tool teams use to make design decisions. The Kano model focuses on users' basic expectations first. It predicts the investment a team needs to make to elicit delight from users. In this seminar, Jared talks about how your competitors, existing design debt, and the evolution of ideas, from innovation to market maturity, all affect how you need to design today.

Hey, Jared. Thanks for taking some more time to talk about this useful design tool.
Jared Spool: Well, thank you. I'm happy to be here.
Adam: For those that weren't with us that day, can you give us an overview of the seminar?
Jared: Yeah. I talked about this simple little model, invented by a Japanese researcher named Noriaki Kano, that helps us predict user satisfaction based on the investment that the team makes. It basically maps the types of investment you make, and it helps us understand whether the design is going to come out delighting users or frustrating them, and when it comes out frustrating users, where we might have missed in our investments.

It turns out that there are three curves in this model that make the most difference. The first one is what's called the performance payoff. The performance payoff is a straight feature-building model: you just keep adding features, and the more features you add, the more payoff you get.

The second curve we call basic expectations. These are things that users just naturally expect, features that every product in the genre has to have, so they don't get very excited about them. The satisfaction you get from investing in basic expectations is limited, because you can't get above neutral satisfaction by meeting a basic expectation. All you can do, by missing one, is frustrate people.

And then, finally, we talked about excitement generators. Excitement generators often take only a little bit of investment but, in fact, are the delighters that we hunt for. The more you invest in these things, the more likely you'll delight folks. But those delighters are often short-lived, because once your competitors and everybody else in the market do the same thing, they shift down to basic expectations, and suddenly you're back to hunting for delighters, because the basics alone aren't making people happy anymore.

It turns out that these three curves help us understand why users get frustrated when they do and why they get delighted when they do and what we have to do to make things right. That was the shortest version of that presentation I've ever given.
Adam: Very good. Well, as always, we had some fantastic questions from our audience, and as is often the case, we didn't get to all of them, so let's tackle some now. One thing came up in a bunch of questions, and there was even some reference to it on the Twitter stream: you talked about consistency and made it sound, in a lot of cases, like a bad thing. Can you talk a bit about that?
Jared: Yeah. It's not a bad thing. Being consistent's not a bad thing. Focusing on consistency from a design perspective is not a good thing. The difference is, it's how you approach the problem, right? I don't think anybody argues that if you use a term in one part of the interface, you might want to use that same term in another part of the interface. That is what we often think about when we think about things like consistency.

But once you start down the road of saying, "Well, things need to be consistent with everything else," it becomes really problematic, because you're constantly designing to match things that you don't know the user is actually paying attention to, and users don't care whether things are consistent with things they're not paying attention to. They only care when it's consistent in a way that they expect. The key word there is actually "expect," not "consistent." The trick is not to make things consistent. The trick is to meet user expectations.

Let's take the idea of a term, right? In many Windows interfaces, if we had a function that would let me send the document I'm working on to the printer, Adam, where would you expect that function to be located?
Adam: In the File menu?
Jared: In the File menu. OK. What would you expect that to be called?
Adam: "Print."
Jared: "Print." OK. If we called it something else or we put it someplace else, we're going to piss a lot of users off, right? But that's because that's what users expect, and they expect that because people who use Windows have used a lot of Windows applications and every other Windows application puts it there. But if you think about it, under File, Print makes no sense whatsoever.

Print makes even less sense if the application is, let's say, some sort of database app, where the screen I have currently up is a data record--not the whole file, but just one record within the file--and I hit Print, and all I want to do is print that one record. Let's say it's a contact record, so I want to print the name and the phone number and the person's email address, right?

Why is it under File? If all I want to do is print a single contact in my contact database, wouldn't it be better to be under Contact? But everybody expects it under File, so we should probably put it under File. But the only reason they expect it under File is because of years of training that Microsoft has done and all the other Microsoft apps, that Print is always under File.

This is going for expectations. If you say, "Well, we're just going to make things consistent, so we're always going to put things where they are in other things, or we're always going to label them the same way as they're labeled other places," but we don't pay attention to what users do and don't expect, things will start to break.

The trick is to follow the expectations, to go after what users are expecting, to say, "OK, what I want to do is really understand who my users are and what their expectations are. Do they expect the print command to be under File, or do they expect a big button on the screen to say 'Print,' or do they expect it to say something else?"

Print is a frequent function, but take something less frequent. Say we want to email this record so that it shows up in your version of your contact database. Do we call it "synchronize"? Do we call it "email"? Do we call it "make synchronization record"? Now we have to figure out, "What do users think? How do they think about this function? How do they approach it?" We talk to them about what they're actually doing, and then give it a name that matches their expectations, which may or may not be the same name we would've given it someplace else. That's the trick.

Designing for expectations is very user-focused. It's focused on what the users think, whereas consistency is very system-focused. It's focused on the system we're building. When we're given a choice between doing something user-focused and something system-focused, most of the time we want to opt for user-focused, because that will create the best experiences. That's why I say that designing for consistency is not the right thing, but designing for expectations is the right thing.
Adam: Let's stay on this theme of customer expectations. The team at Work at Play wants to know about some more research methods that might be available and particularly good for unearthing customer expectations.
Jared: All of the standard research practices work here. I mentioned in the seminar that field studies, going out into the field, are a great way to look for basic expectations in particular. They teach you what users are currently doing and how they do them, so you can then see, "Well, OK, these people are heavy Excel users, so they're going to expect things to work like Excel when we're dealing with Excel-like things." We can start to look at those mappings and those things and work from there.

We use field studies to do that. We also use the field studies to identify the excitement generators, because, oftentimes, when we're in the field, we can see what we call tool time. Tool time is time that the user spends working on something that doesn't really give them any significant improvement in the outcome or the quality, but just ends up taking time.

For example, imagine an application that a system administrator uses where they have to set up a new record for a new user in whatever the application is. They have to create the user's record, and then they have to go and put in various preferences. They have to put in their access rights. They have to change their permissions. They have to do these things.

If you put all those things on separate screens and you make the system admin jump from screen to screen to screen, identify which user each time and then set the settings, and then go to another screen, identify the user and set the settings, and do that six different times, and potentially forget to do one and then realize that he forgot to do one so he has to go back and change it again, all of that is tool time, right?

If you can put that all on one screen and eliminate that tool time, that's an excitement generator for that experienced person, because suddenly it's like, "Oh my gosh! Everything I want is here. I can, in one thing, go click-click-click, and I'm done and it's perfect." Maybe smart defaults would help there.

These are things that you can discover when you go out and do field studies, because you see that movement where they do six different things, and you say, "We can combine that into a single, one-minute activity and be done with it." Field studies are a really important method for going out and discovering this stuff.

Another one is to look at different ways that the organization currently gets feedback. The most common way, of course, is customer support. In customer support, you have people calling up and asking, "How do I do X?"

Well, some of the things they're asking about are things they want to know that just aren't obvious. That's an opportunity to make them more obvious. That's a way to delight customers: make these things obvious enough that they don't have to call to figure out how to do them.

Another thing is, people are calling up and asking for things that you can't do with the application. In some of those cases, those are going to be basic expectations that you're missing. You want to ask the question, "Should we have that basic expectation in the application?" The answer isn't always "yes." Just because someone asks for it doesn't mean you should do it.

The fact is that they're calling support and asking, "Well, how do I get headers and footers on this printout?" You might want to start asking the question, "Why do people want headers and footers? What are they doing with this output that they think they need to have a header and a footer on it?" That will help you understand whether you've missed a basic expectation or not. That's a key piece of it.

The last thing is that there's a variety of lab techniques that you can use to look for things that are delighters and expectations, but primarily basic expectations. The first one that I like to use is something called interview-based tasks.

This is when we don't necessarily give the user an assigned task off the bat. We don't say, "Can you pretend that you are interested in finding out where the company headquarters is located and go look and tell me where the headquarters are located?" That's what we call a scavenger hunt task.

What we want to do instead is interview the participant in the study, so we interview them for a few moments to find out, "So, tell me about the last time you needed to contact someone in your database? What was that like? Who did you have to contact and what did you have to do? Were they already in the database or did you have to add them into the database?"

You start asking all sorts of questions about this last experience. Then you have the task that you have them do in the test. You basically have them replicate what it was that they just asked you to do.

By doing this, you're using their terms and their ideas, and you get to see things. The problems that emerge are ones that don't emerge when you create the task yourself and ask the participant to do it the way you want it done, because participants bring all sorts of little pieces of the puzzle from the way they actually do things. All of a sudden, you're going to discover missing expectations, things that should be there that are getting you in trouble, and you don't even know it.
Adam: That suggestion to use customer support is an interesting one. I suspect the trick there is to avoid that common trap of trying to be all things to all people.
Jared: Yeah, and that's true with any of these things. You have to sit back and ask yourself, "What is this person trying to do?" The trap isn't even trying to be all things to all people; it's taking requests at face value.

When someone calls up asking, "How do we get headers and footers into this thing?" it's tempting to say, "OK, well, we could implement headers and footers." That's a well-understood technology; lots of applications do it.

The real question is, "What is the person doing with this printout that they actually want headers and footers for? What is the problem that headers and footers are solving that you're not solving in the application already?" That's the really interesting question.

That's the point where you want to take a moment and actually see what they're doing. Go out and visit them and see the way that they currently are using this and discover that maybe headers and footers is just a hack that they thought of to solve a problem where the problem is something different.

Maybe they need these things individually labeled so that they can stick them in a filing system and find them later, or it's some other thing entirely. Frankly, if you knew the bigger problem, the computer could probably come up with a better solution than adding headers and footers. You really want to go explore what the user is trying to do, and that's the key there.
Adam: What about with a new product or service? The CBC asks this and wants to know: how do you meet expectations when your users may not have fully formed expectations?
Jared: They almost never have fully-formed expectations. Expectations are often things that they can't talk about. For example, imagine Adam, that you and I were going to open up a new hotel. In this new hotel, we're going to put in hotel rooms that are really comfortable, really useful.

We ask our customers, "What do you want in a comfortable, useful hotel room?" They might tell us things like, "Well, I want the bed to be really soft and I want the television to be really nice. I wouldn't mind if there was a glass of wine waiting for me in my room when I got out of my business day, and maybe the coffee was waiting for me outside the door."

They could tell you all sorts of really neat little things like that. I'm going to bet though, they'll never say, "Oh, and I want to make sure the shower has hot water." Yet, if you don't provide the shower with hot water, you're probably going to piss off every customer. It doesn't matter if there's a glass of wine waiting for them if you don't have hot water in the shower.

Most basic expectations, you can't just go and ask about. You have to explore. You have to spend time actually watching and listening and seeing how things really are. And the thing about products that haven't been built yet is that they almost always solve a problem that people already have.

Problems have been around for a really long time and people don't change that much. The example that I like to use, I ask people this, and I don't think I ever asked you this, Adam. Do you know when the first fax machine was invented?
Adam: Oh gosh, no. I mean, I would probably guess in the '60s at some point?
Jared: That's interesting, a lot of people guess that. It turns out that it was a fax machine that actually helped the FBI get Al Capone because they faxed his fingerprint record, so they actually had it in the '30s. It was also heavily used in World War II.

It was a very expensive technology back then. Both sides had to have these very expensive pieces of equipment, and they had to match up exactly. The ones they used in the '30s ran over telegraph wire. It turns out that the first fax machine dates to around the 1870s.
Adam: Wow.
Jared: OK? If you think about it, all the fax does is it gets a message from one place to another. The problem of getting a message from one place to another, that pre-dates the 1870s by a lot.
Adam: Sure does.
Jared: We're talking the Greeks were worrying about getting messages back and forth, and the Egyptians. The fax is just a new solution to an old problem. For us, it's not even that new anymore. In the total scope of the problem, it's actually relatively new.

Chances are, whatever problem you're building for has been around for a really, really long time. You're just coming up with a new way to solve it. What you really want to do is figure out how people are solving it today, because chances are, that problem isn't going unsolved. There are very few things we design today that address unsolved problems. They're solved in some form or other, just more clumsily than the way you're hoping it will happen.

You need to just spend time looking at how people do things today. If it's really that brand new, then there aren't that many expectations. You've got this nice green field. You can build it however you want.

When Twitter came out, there weren't a whole lot of expectations around Twitter. But it had a completely different set of challenges, because without any expectations, people don't come to the design knowing anything, so now you have to explain to them why they use it and how they use it. Think about those early days of Twitter when everybody just had no clue what they were supposed to do with it or why.

That creates a whole new design problem. If you don't have expectations already, that actually doesn't make your life any easier. [laughs] It just changes the dimension of the problem. I always get worried if people are telling me that they don't think that their users have any expectations, because that means that they are in for a hell of a ride. [laughs]
Adam: So, hotels. Our friends at Marriott had some questions during the seminar, as they were actually thinking about how to use this model, and this particular one I thought was interesting. If you have different audience segments that show variations in their expectations, do you try to plot those in the same model, or do you separate them out?
Jared: Well, the model is really for the design, so separating them out gets you into trouble. But if you have folks who have wildly different expectations, and more importantly conflicting expectations, where one group of users expects a feature to work a particular way and another group expects it never to be that way and will be upset when it is, you've got a problem, right? You have to map those expectations out and figure out who's going to be upset when this thing is missing.

I guess you could draw different sets of expectations for different groups of users for the same product. But really, the idea here is, I don't see teams drawing these models very often. The model helps explain what we're seeing, but the model itself, I haven't seen much practical use in the act of producing the product, in other words, sitting there and saying, "OK, this feature is currently sitting at this point on the model and that's why it's where it's at." I think just dividing the features up into basic expectations or just feature growth or excitement generators is a useful exercise, but plotting them on the graph, per se, is not that useful, so I don't think you need to do that.

But this gets to a bigger question, which is, are you designing for user populations that have radically different sets of expectations? If you do, you might consider creating separate designs for them, right?

I can see, for example, designs for corporate travel agents, people who are booking dozens of hotel rooms simultaneously, all the time, day in and day out. I can see their expectations being very different than the casual vacationer, or even the semi-frequent business traveler who books their own travel. I'm not sure that you'd want to try and build one interface for all of those folks, but instead, maybe come up with a way to build separate interfaces for each of them to match each of their separate expectations and delighters.

That would make life a lot simpler on the design side. Of course, it makes it a lot more difficult on the execution and maintenance side. Design is always about trade-off. The question is, how much do you break these things out? I think that you want to at least look at that and see how that plays.
Adam: Jared, how about in corporate environments where the people who are selecting a product or a service are not actually the people that are using it?
Jared: Yeah. This happens a lot in companies, right? Where people are getting an application, say an SAP or an Oracle, or performance-review system or an expense-reporting system or a time-clock system like Kronos, and they need to use this. There's a team out there who's trying to make the best of it, but the features and functionality are being determined by other people.

In some cases, features are added because they make the HR department's or the accounting department's job easier, but they don't at all make the job of the person who has to enter the data easier, and so the system ends up frustrating the people who actually use it.

The thing here is that you've got a couple of different things that are happening simultaneously. One is that you have to ask the question, right? The Kano model predicts delighters. It predicts what's going to delight people. But is it the end users, the people who are entering the data, that you have to delight, or is it the people who are choosing the system that you have to delight?

As we talked about before, if you end up in a situation where delighting one group but not the other is a trade-off you have to make, you may go after the money side of the equation, the people who are buying this thing, rather than the user side, the people who are using it, and therefore the users are going to be frustrated.

What you can do is spend time looking at what the users are trying to do with it, and look for basic expectations and places where you can make them happy. But it's always a hard sell, because, frankly, if the users have to use it anyway, the argument goes that you don't have to make any investment whatsoever.

The place where the cost comes in, which people don't often take into account, is the quality of the resulting data. That's probably where you want to go with this, is to look at not just the money that gets invested in changing the design, but the money that's invested as a result of really bad data that comes out of the system.

Imagine an expense reporting system, where the accounting department doesn't get what they need in order to get the expenses right. They, therefore, are rejecting a lot of things, and people have to do it again. Morale is now lowered, which causes other performance problems throughout the organization.

If you can start to factor all of those costs and start to look at why all of those things are causing problems, now you have a different formula for that investment side of the Kano model, and you can map that to satisfaction, and it will probably tell you quite a bit.

If you just try to go with the dollars, the end users aren't investing any dollars in the product at all. The developers are. But the developers are answering to the people who are making the product choices, and those people are delighted just by having this. They're not going to see why spending money to make it easier for the end users to enter the data is going to pay off if they're already getting the data they want. You have to prove to them, "OK, what happens when the data you're getting isn't the data you actually want?"

That's really where you have to go. It's a complicated answer, and it's not simple, because it's not a simple scenario. It's a complex space, this idea that the people who pay for it are not the people who use it. The people who use it don't get a say in what's going to delight them, and that's where you get into trouble.

I hope that helps answer the question.
Adam: It did for me. I've been in that world, so I know how it can get disjointed.

Jared, this was great. Thanks for joining us.
Jared: Thanks for having me. It's nice to sit on this side of the microphone for once. [laughs]
Adam: And to our friends listening in, thanks for your support of the UIE virtual seminars. Goodbye for now.