The SpoolCast with Jared Spool

The SpoolCast has been bringing UX learning to designers’ ears around the world since 2005. Dozens and dozens of hours of Jared Spool interviewing some of the greatest minds in design are available for you to explore.

Episode #281 Chris Risdon - Shaping Behavior, by Design Live!

June 30, 2016  ·  46 minutes

Listen Now

Download the MP3

Mobile, ambient technology, and connected devices are about mediating people’s behavior in their environments. Uncovering the whys and hows that drive behavior takes empathy, hours of observation, and masterful prototyping skills. You’ll succeed when you make, test, iterate, and learn.

Show Notes

Chris will explore the importance of engaging with users at the behavioral level. He’ll explain prototyping’s key role in closing the feedback loop on your designs. You’ll leave with an understanding of the best types of making and prototyping for observing and eliciting behaviors.

To see the video of Chris's talk, visit the UX Immersion: Interactions section in our All You Can Learn Library.

Full Transcript

Chris Risdon: Today, I want to leverage a little bit the theme from yesterday around prototyping, but really think broadly about our process and our making: the things that we output, the process that we have, and the activities we do to get to our design, and start connecting that a little bit more significantly to behavior, to the behavioral sciences, and to having a deeper understanding of how people think, how people make decisions, what biases they have, and what it means to actually leverage that. I think we do it a lot. But we do it accidentally or without a lot of rigor. I want to see if we can push that. This is a UX conference. Most of you are in some way related to UX, if not identifying as UX designers. But this is "UX Immersion: Interactions," so I'm also going to use interaction design as the lens. Among the many skills we might have as UX generalists, in research, in strategy, in IA or usability, I'm going to talk about how interaction design is going to be a critical discipline to level ourselves up in, so that we can apply the behavioral sciences in a practical way. First, I'm going to start with a made-up story, a scenario. Imagine it's 2004. It's old timey days before airport WiFi, before mobile phones, or at least the smartphones that we have now. In this scenario, you're at an airport and, let's say, it's a two-hour layover. You've got time to kill, so you go to an airport bar. You sidle up to the bar to have a beer. They usually have sports or news on. In this case, there's news. Actually, in this case, there's breaking news: the tsunami that hit Indonesia after the Indian Ocean earthquake in 2004. This is a breaking story. It's really tragic. You see the death toll, the chaos. All of a sudden, they're talking about 10,000 dead. Five minutes later, it's 30,000 dead. Then 20 minutes later, it's 70,000 dead.
Your heart goes out to these people halfway around the world. You have this sympathy for them. You're listening to the story, and in the back of your mind, you're saying, "What can I do? This is really awful." One of the calls to action you see is: if you can do anything, donate to the Red Cross. If there's one way you can help, that's going to help. They need the supplies, the blood, the funds. You're like, "Well, that sounds like a good idea. I should donate to the Red Cross." Again, in the old timey days of 2004, what do you have to do? You're at the airport. You think, "Well, OK. I'm prompted. Someone asked me to donate to the Red Cross. I'll do that." I have to make that mental note: "Later, when I get a chance, I'll donate to the Red Cross." Time passes. I get on my flight. I get home. I walk my dog. Check my mail. Maybe I'll remember it. Maybe I'll turn on the TV and see the story again. But now, all of a sudden, I've got other things going on. But if I decide to do it, then I have to get online and go to the website. Basically, it's like a typical e-commerce funnel. I have to enter billing information, my address. Then the hardest decision of all, not necessarily ironically but interestingly, is that I have to make a decision here. How much do I want to donate? I could donate a hundred bucks because I want to feel like I'm actually contributing, but that's actually a lot of money that I'm not sure I want to part with. Or 10, 20 bucks, is that even worth it? It's so little money. You have what is a typical funnel. You might have a call to action go out and 100 people say, "That's a good idea. I should do that." At every stage, there's a little bit of drop-off, a little bit of drop-off, and you might be lucky if you get five people to actually follow up on that commitment, if they're somewhat motivated and then they get distracted. Let's take that scenario, but now let's move forward to 2010.
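The compounding drop-off Chris describes can be made concrete with a little arithmetic: each stage keeps only a fraction of the people who reached it, so the losses multiply. A minimal Python sketch; the stage names and retention rates here are invented purely for illustration, not from the talk or any study.

```python
# Hypothetical donation funnel: each stage retains only a fraction of
# the people who reached it. All numbers are made up for illustration.
stages = [
    ("remember later", 0.50),
    ("get online and find the site", 0.70),
    ("enter billing information", 0.60),
    ("settle on an amount and submit", 0.40),
]

people = 100.0  # everyone who thought "that's a good idea"
for stage, retention in stages:
    people *= retention
    print(f"after '{stage}': {people:.0f} remain")
```

Even with no single stage losing more than half the people, the multiplication leaves only a handful of the original 100 actually donating.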
In this case, you have the same situation. You've got the time with the layover. You go to the airport bar. As you go up to the bar, there's a news story. Again, sadly, it's a tragic story that's unfolding. It's breaking news. In this case, in 2010, it's the Haitian earthquake. You're seeing these images again. You're seeing this story develop. There's rubble. There's bodies. There's death. There's the chaos and disorganization. Your heart goes out to these people again, and again you think, "Well, what could I do?" Again, there's a call to action: donate to the Red Cross if you can do anything. But in 2010, you have a little bit of a different situation. How many people recognize this number? A few. This is 90999. This is the text messaging service that lets you donate to the Red Cross. All you have to do is basically get on your phone, text that number, and put in what cause you're donating to. Technically you don't even have to do that, but it helps them know where you want the money to go. They'll confirm that's what you want to do, and they'll automatically take $10 out of your next phone bill. Now, when I'm really thinking, "This is something I want to do," I can do this even on an old timey flip phone, and I don't have to think about the money. It's $10. I don't have to compare myself to other people. If other people do this, they're donating $10, so we're all in it together. I'm not even going to see the money. I probably even have the phone bill on auto-pay, so I'm not going to notice that $10 at all. Most importantly, I was most motivated when I was seeing those images, when they prompted me and said, "You should donate," and I was actually able to donate at that time. It seems really simple, almost remedial, but it was actually pretty powerful stuff. In a Pew follow-up survey, you find out that $43 million was raised via mobile texting for Haitian relief.
What's really interesting, though, is that most of these donations were made on impulse. Basically, they saw that prompt: I'm seeing these images, and I'm really motivated to do it. Their interest in Haiti's recovery quickly waned; they really didn't follow up much. I'm sure it was on the news, but again, like getting home from the airport, you get distracted, and you have other things. What these two things mean, those second and third bullets, is that from that $43 million, if they couldn't donate at that time, when they were prompted, that money wouldn't have come in. Most of that money is probably, in a sense, new money on top of what other people might donate. And as to whether this can become a new behavior that gets ingrained in how we operate, over half of the donors made text message contributions to other relief efforts. For example, the nuclear reactor incident in Japan a couple of years ago. That's pretty powerful stuff. I picked that one because I think it has a lot of impact, but it exists all around us. You've seen it: you swipe your card at the point of sale, and it says, "Do you want to donate one, two, or three dollars to some military benefit, or the ASPCA, or a cancer research fund?" It's like, "Yeah, what's a dollar? I've already swiped my card, I'm spending $32, what's three dollars?" All of a sudden, if we are aligning these things, we can actually influence behavior by thinking about how they can be enabled. That's leveraging behavioral science, whether we think about doing it explicitly or not. Behavioral science is basically the systematic analysis and investigation of behavior. It's a collection of academic and research disciplines, behavioral economics, behavioral psychology, the neural and cognitive sciences, that have been developing over the past 20 years and more, and that are really relevant to us.
What I'm looking for is the gap: how do we consciously apply that in our work, where we're touching so many consumers with our products and services? One of the best analogies I have to convey what that looks like is documentary filmmaking. At the turn of the century, after they got used to the novelty of actually running the camera, so they would fix it in some place to watch people coming out of a church, or people filing into work, the main form of filmmaking was basically filming a stage play. You had the tableau of the stage, it was a scripted play, so let's just film that. You could scale distribution by taking that and showing it to other people. Documentary filmmaking came a little bit after that, and that seemed really interesting: "Oh gosh, editorial filmmaking." It was thought of as the objective form of filmmaking. They called it "life as it is," a reflection of what was happening, a factual film that was dramatic. We're savvy enough now to know better. Every decision a filmmaker makes, especially in non-fiction work, shapes the result: who they decide to film, what they ask those people, or what those people say, where they shoot, in what location, how they edit, and obviously what music they put to it. The fact that they have framed it into an hour, or five hours, or three hours, a story that's actually probably more expansive than that. All those elements affect how we, the audience, receive and perceive that story. You hear things like, "That was taken out of context." We know that documentary filmmaking is actually a bunch of decisions that influence our perception of the story being told. As designers, I don't think it's any different. I think every design decision we make influences the user. In some cases, it's very benign or benevolent in what we do.
We want to use smart defaults in a form so that we help people along to complete the task they have, because we know this is likely what their decision would be. That's a decision, and it's a decision that influences the user. The same with point-of-sale prompts: that's a decision, and it influences the user. One of the terms from behavioral economics, in Thaler and Sunstein's book "Nudge," was choice architecture: organizing the context in which people make decisions. That, I think, is reflective of what we do, sometimes with real intent to influence people, and sometimes not. We're doing that filmmaker work. We're editing, we're making decisions, we're deciding what design patterns to use and how they influence. We think about things, particularly from interaction design, like framing, and affordances, and feedback, and feed-forward, and we are making decisions, essentially editing what happens, so that we can see how we influence people's behavior. A popular case study from behavioral economics is the idea of rearranging and reconfiguring a cafeteria. If you're a large employer, you have thousands of employees, and in their best interest, you want them to make better dietary decisions. You label things in a certain way, you move the sweets and make them harder to get, you move the healthy stuff up front. You make all these little decisions, the choice architecture, and you're not necessarily being what they call paternalistic. You're not trying to be a complete nanny state, in that you're not taking away choices; you're architecting the choices. I can still get the sweets if I'm motivated. This is really interesting, and these are the case studies you often hear if you go and research behavioral psychology and think about how to apply it to design. It's fascinating, but it also doesn't scale really well.
We live...most of the people in this room live in a world where they're designing products and services that have to go to market at scale. What the cafeteria case study and many others like it tell us is that changing the environment is actually the most impactful way to influence behavior. There are other little examples. "Oh, I don't want to keep charging on my credit card." Someone will say, "Put it in a block of ice in the freezer." That way, you've changed your environment. The card is not on your person when you go out into the world, and you have to go to all this effort if you really do need to use it, which means you have to be motivated; there has to be a good reason for it. Changing the environment is the most impactful way to change behavior. But as I said, generalizing, those of us doing consumer products and services, in the enterprise, for consumers in the app store, in software and on the web, work at a certain scale where we can't go and keep rearranging everyone's environment. We work in a world where our products and services live in an environment over which we have little control. We might be able to accompany people in that environment, by being in their pocket, so to speak, but we can't necessarily control the environment. If we want to think about influencing behavior, or at least think about what power we are responsible for in potentially influencing behavior, we have to ask: what is in our control? What do these things lend themselves to, in interaction design, in UX design, and in design in general? We can influence how people perceive the environment. That may be simply framing something in a certain way, or it might be literal, like using augmented reality. We can influence how people navigate the environment, again, to a certain extent: "Here are the steps you can take, let's foreground that. I might want to do that." Or maybe literally, we're mapping what people should do.
We can influence how people interact with the environment. These are the things that should start to resonate: "Yeah, I have a certain competency or capacity to think about these things. I'm understanding behavior, because I do research, and I'm understanding interaction patterns and design patterns that I know can influence people." I'm trying to make that connection: if I want to influence behavior, or at least be aware of the influence I can have on behavior, what lens am I looking through to be able to do that? Because I can't go and rearrange the cafeteria. I can't rearrange their home. I can't necessarily take away the temptations that may be walking around, and things like that. In a crude, overly reduced, and overly simplified way of thinking about where we've come from in technology design and interaction design: processing allowed us to do things better. Do our spreadsheets and our documents better, and then we were able to make that faster and a little more portable through laptops. Connectivity allowed us to be connected to things, and so we went from productivity apps, even though they still exist, to apps like your Facebook and your Netflix. I want to be connected to my movies, I want to be connected to my email, I want to be connected to my music through iTunes, and those can also go with us. Now, what we have are the sensors that literally live on our person. The connectivity, the processing, and the sensors are added to the mix. They're with us. They're on our wrists. They're in our pocket, in our phones. What you notice, then, going from spreadsheets, to being connected to really interesting social things in our life, to being connected to us, is that the closer technology gets to us physically, the more it becomes about us. We could look at extremes, but in a simple way, think about how, before, calendars were about your appointments and the events that were happening.
They're still like that, but now, all of a sudden, because calendars can access other apps, calendars can do geolocation with the sensors. They're actually trying to be smart, trying to optimize, and it's really about your behavior. Let's not be about your events. Let's be about maximizing your time and taking opportune moments. We're going to be smart about the events, so that the events don't seem like a burden. Now that the calendar's in our pocket, and it's got more sensors, and it's got more connectivity, it's thinking more. The calendar is more about us, and what we have, and our time, versus tracking and being connected to your actual appointments. Most of these features, most of this idea of influence, actually exists at the micro level. Meaning there are features and instances that are really good at spurring conversion, really good at spurring on-boarding, and those types of things. A good example is obviously Amazon OneClick. They've taken the friction away from shopping, from going through the checkout cart. If you've done retail, you know that's the funnel. Every little piece of friction is going to have some effect on people whose commitment is going to wane a little bit. If we take that away, will you buy it? They love it, because they want you to buy more things. You might like it, because it's actually kind of convenient. Then there's another psychological principle happening there as well. If I go through the typical checkout process, I've put something in there that's $30, something that's $20, something that's 50 bucks, and then I have to decide to spend $100. Do I want these things? Maybe I'll take this one thing out, because I think $50 or $70 is more palatable.
Here, it's psychologically much easier if I buy something for 20 bucks, because that's just 20 bucks, and something for $30, because that's just $30, and something for 50 bucks, because that's just 50 bucks, versus seeing them all together as $100. There are a few things at work here, but it's at this feature level. It's trying to spur conversion, but it's also making something convenient for us. Now we have products at the macro level, actual products or services for behavior change. The obvious ones are your Nest, and Jawbones, and FitBits, and budgeting apps, and getting out of debt. If we look at this on a scale, thinking about influence and persuasion, which can be scary words if we're thinking about designing, it's not binary. It's a spectrum. Where something lands on that spectrum is pretty much going to be defined by the intent of the people designing or implementing those services. Then there's another dimension, which is transparency: to what extent do you know that this thing was designed to change your behavior? You might have things on the utility side of the spectrum, where we're not trying to really influence you. We might have some influence, like those smart defaults, but we're not trying to. Versus the other side: yes, we want to influence you, but we're also saying, "Hey, we want to influence you." We're saying it out loud. You have the things that are more about being connected products. iTunes is going to connect you to your music. It's not going to tell you how to make a playlist, or what to do with that music. The same with your images, the same with your emails. These are utilities, tools that allow you to, in theory, do things your way. Obviously, within the limits of what features and capabilities they want to support.
Then some things, like that smart calendar, might start to move a little bit, because they start doing some selective editing. Certain things like, "We don't think we should just support all the features. We're going to support these features." That's going to have some influence, fairly benign, but you know it. They'll come out and say what they're doing. That's why it's above the line. All the micro things don't declare a value proposition. They don't try to say what they're trying to do, so this is where they live, under the sea, so to speak. They're often not trying to obfuscate what they do; they're just there, and often, if done right, they go unnoticed. Smart defaults in form design, where we might make a decision for you, influence your behavior, because you might be less likely to go find the right option yourself. You'll be like, "Oh, I don't need that." It's kind of like TurboTax. They have these little cues where they ask you something obscure, and they tell you, "This doesn't apply to most people." Usually you can go, "Yeah, I don't pay taxes on farm animals, so I can move on," but you might just see "this doesn't apply" and miss something. If they manage that right, that might be rare. Other things: progress indicators, which try to influence your behavior by keeping you on the path, but also try to help you know where you are. Visualizations, framing things in a certain way, opt-in, opt-out. All these things are hard to quantify, and they can all move. Opt-in and opt-out can be really used for evil, as dark patterns, and they can also be used for good. Amazon OneClick sits there, toward persuade. In the middle are these more ephemeral things that connect our sensors and our connectivity, like Pinterest and Facebook. These have what you'd call internal mechanisms for thinking about behavior.
They have variable rewards, and they try to be really sticky. Internally, they want to influence you. We could have a long talk about ethics, but really the shortcut on ethics, when you use this model, is this lower-right place: when you want to influence someone, to persuade someone, without offering anything in return. Even Amazon offers something. I actually want an easy shopping experience, so I actually like it. They obviously want me to buy more, but I like it. If they go all the way, where it's completely in your self-interest and not in the consumer's at all, that's manipulation. You manipulated them. If you don't just leave it unannounced, like OneClick or a progress indicator, but actually want to obfuscate what your intentions are, that's deception. That'll be things like dark pattern opt-ins: "I didn't actually opt in to this newsletter," or from a product standpoint, "I think I'm getting something for free, and actually I've signed up for something." They've hidden their intention, and there's no value in it for me, for what I'm actually paying for. That leaves the upper right. The upper right is where, I won't call them new, because these types of services and products have existed, smoking cessation, weight loss, but it's new relative to our new world of technology and sensors, in that in the past five, six years, it's really started to explode and become a real focus. Again, the technology close to us makes all these products and services that are about us. I sometimes call this the new "me" generation, because the things we're buying now are so focused on us and our behaviors. If you look at that area, I call it behavior change as value proposition. Which is, hopefully, self-explanatory, but I think it becomes different from other types of products that have a different type of value proposition.
These are products and services designed and marketed on the premise that the benefits of the value exchange are specific, behavior-based outcomes, outcomes that are essentially your goal. "I want to lose 10 pounds. I want to lower my cholesterol. I want to get out of debt. I want to save." I won't actually get to any of those unless I do certain behaviors. What I really want is a product that will augment my ability to do those behaviors, to help me do those behaviors. Beyond identifying them, there are some key characteristics to know about, that you see in the wild. The first one is obvious: it comes from behavior-based progress. There's also the fact that users self-select into the value prop. If I go back to the cafeteria, that's what they call an interventionist method. I didn't decide I wanted the cafeteria arranged that way when I go get my lunch; my employer decided to do that for me. In a sense, they intervened in my life. When we buy things off the shelf, proverbially, when we download an app, or whatever we might do, we're self-selecting. We're saying, "I actually want this influence." That changes things a little bit, because it changes things like accountability and self-determination. How much can you actually hold people to something when they're saying, "Well, I'm paying for this. Maybe I should, maybe I shouldn't"? Data collection is a prominent mechanism. Those are the sensors; we have to know what people are doing in order to understand their behavior and influence it. There's that sense of augmented ability, where these products are augmenting internal or external behaviors, the things I do out in the world, and behavior change management. The other important one is that the value proposition is time-released. Most value propositions are what you'd call off-the-shelf, out-of-the-box value propositions. I get my iPhone, I sync it up, and I can use it. I can get on the web, I can download apps.
If I put a Nest on my wall, or I put a FitBit or a Jawbone on my wrist, or I download a budgeting app, I think, "I got something cool," and that feels like the value proposition: a sexy thermostat, or a really cool wristband. But the value proposition is actually that goal. I want to lose pounds, I want to get fit, I want to get out of debt, I want to lower my energy bill. That's not going to happen out of the box. That's going to happen in a week, in a month, in a year. I don't know if we've cracked the nut of understanding that, because it also exists in a lot of the things we do, in the world of financial services. I know from my workshop there are a ton of financial services people here. It exists in how people achieve their goals in healthcare, in government, in a lot of even smaller ways. We have to understand that the thing we offer might not actually pay off out of the box. What does it mean to show and validate the value proposition as you go? It's not just the thing you technically get off the shelf, whether that's an app you download or a physical device you wear. Then there's internal influence. Those are the Snapchats and Facebooks, again, recognizing this in the wild, for better or for worse. I don't know if you've ever heard of Nir Eyal and his book, "Hooked," but he talks about some of these internal mechanisms. A trigger, then the action, the thing you do, ideally a very simple behavior, like uploading or sending an image, or scrolling through a stream. A reward, so that even the idea of, "I'm going to scroll through my stream, just to find the right update, or the right image," is this idea of variable reward. Where can I find it? And investment: what can I do once I've been using this for a while? In old timey days, we'd call this being sticky, increasing switching costs, and all this stuff. It's the idea that I've invested in it, so I'll keep using it.
This can seem really manipulative, and most of those products and services don't always do this in such an explicit way, but the mechanisms are there. You can use this for really altruistic reasons. Let's say you want to help somebody get out of debt, or you want to help them get fit, and you know that in order to do that, you need them to be engaged in an interface or an app. You might need to apply some of these internal influences, to keep them engaged with the app, in order to get the benefit of what the app will give them, as far as their external behaviors: having the ability to do something else, like actually run three times a week, or pay down my credit card faster. With that as a setup, in what way can we think about our capacity as designers to leverage this? Ideally responsibly and effectively, but at least much more aware, so to speak. Interaction design is a particularly good discipline in which to think about it. This exists across disciplines, but interaction design tends to be about behavior: mediating behavior, shaping behavior. In some way, thinking about the behavior of the system, and that behavior is often designed to help people reach their goal. We already know they want to reach their goal. We already know behavior change is all about achieving goals. I think there are a lot of ways we can leverage the things we have as a discipline to make the connections between all this academic theory and research that exists, and then think about how to make the leap to being more aware, more cognizant, and more rigorous in how we apply it in our work. The simple framework I use, which I think can apply elsewhere, but which I often use for behavior change as value proposition, is collection, story, and communication. That's really thinking about the data and the sensors, and what our abilities are to understand people's behavior.
It's the framing and anchors, leveraging some stuff from interaction design, and design in general, to understand how we can take this data and turn it around into something that's a prompt, something actionable. Then, what is our capacity to actually, in a sense, intervene? What are the feedback and feed-forward things we can do? To break that down a little bit, we can start with the data and sensors. I've already talked about the abilities we now have. We have all these sensors: biometric, image capture, RFID, accelerometer, GPS. On our wrist we can get heart rate. We can get galvanic skin response. We can know whether people are perspiring; we can do blood pressure. We have these sensors, and that's really powerful stuff, but we also have our attitudes and our perceptions about this stuff, which are going to be things like our profiles, our status updates, our shared credentials. I remember it was a big deal, in 2005 or thereabouts, when a product like Mint came out. "Oh, are people going to use this? You have to give them all your passwords and your usernames. I don't know if they'll do it." Now, we don't even hesitate to do that. Our attitudes, and our actual behaviors around our privacy, which we're happy to exchange if the value proposition speaks to us, have also changed. We think about that when we think about data collection, and understanding sensors, and how we can leverage those things. When I first talked about this stuff five years ago, I said, if it can be connected, it will be connected. That time has passed. It is going to be connected. Connected clothes, connected basketballs, connected cups to analyze your liquids. It's all going to be connected. Connected to our car, so that our behaviors around driving, whether we're using too much fuel or too little, going too fast, braking, are all captured.
We're talking about things getting closer to us physically. We're really getting to the point where they're going to be on our person. We can even track our behaviors and health with sensors that are literally on our eyeball. Then framing and anchors, again pulling some things from interaction design. We get all this data, and getting the data is really only half the battle. As a matter of fact, people don't really want a relationship with their data. They want to achieve behavior-based goals. Collecting the data, in their minds, is just a means to an end. They didn't say, "I really want this information." What happens is they get all this information, and they have to carry their own little cognitive load of, "OK, I walked 7,000 steps. Is that good or bad? All right, I actually climbed this many stairs. Is that good or bad? I burned this many calories. I don't know, is this good or is this bad? My average heart rate was 62 beats per minute or something like that. Was that good or bad?" This is cognitive load. This is calculations. The whole reason we have computers is so that they would do the math for us. Then we have to ask, for all these variables as a whole, is this good or bad? We shouldn't have to do that work. That's the friction that starts to lose people, because we don't care about the data. We care about what the data means. We also know that math is hard. We don't want to do it. Computers were designed to do it for us. To talk about how this comes into play, and why framing is such a critical point, I'll go through a little case study. In the book "How We Decide," by Jonah Lehrer, there's a study where they took two groups of doctors and gave them a hypothetical scenario: the Asian flu is going to hit, and left unattended, it's going to potentially kill 600 people.
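The point about cognitive load can be made concrete with a small sketch: rather than surfacing a raw number like "7,000 steps," the system does the math and frames the number against anchors the person cares about, their goal and their own recent baseline. This is purely illustrative; the function name, message wording, and thresholds are all my own assumptions, not any real tracker's API.

```python
# Hypothetical sketch: frame raw step data against a goal and a personal
# baseline, so the person never has to ask "is 7,000 good or bad?"
# All names, wording, and thresholds here are illustrative assumptions.

def frame_steps(steps: int, goal: int, recent_avg: float) -> str:
    pct_of_goal = steps / goal
    if pct_of_goal >= 1.0:
        # Past the goal: frame as an achievement, not a raw count.
        return f"Goal met — {steps:,} steps, {steps - goal:,} past your goal."
    # Below the goal: anchor against the person's own recent behavior.
    trend = "ahead of" if steps > recent_avg else "behind"
    return (f"{pct_of_goal:.0%} of today's goal, and you're {trend} "
            f"your 7-day average.")

print(frame_steps(7000, 10000, 6200))
# The same 7,000 steps reads very differently with a 6,000-step goal:
print(frame_steps(7000, 6000, 6200))
```

The raw data is identical in both calls; only the anchor changes, which is exactly the framing move the talk describes.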
They wanted to see what course of action you would take if you were given two options, two courses of action. The first group was given this pair. Option A: you're guaranteed that 200 of those 600 people will be saved. Option B: you have a one-third probability that all 600 people will be saved, and a two-thirds probability that no one will be saved, so you're dealing with the odds. The second group was given two options, too. Option A: 400 people will die. Option B: a one-third probability that no one will die, and a two-thirds probability that all 600 people will die. You've probably noticed that technically, these are the same options. They're just framed in different ways, and there are a couple of things going on. One is the idea of loss aversion. "That's very certain, I can save 200 people," versus, "That's kind of scary, 400 people will die." If we look at the results, about 72 percent picked option A and 28 percent picked option B in the first framing. In the second one, it was almost completely reversed. It's not just certainty at work; it's also loss aversion, and a little bit of, "Am I even going to bother with the cognitive load here? I'm going to go with the certain option. If the certain thing is that someone's going to die, I'm going to avoid it. If the certain thing is that someone's going to live, I'm going to go with it." It isn't graphic design, and it isn't necessarily a glowing leaf on your thermostat, but that is framing. That's the power of it. I've seen other examples that are really novel. If I gave you two beef patties, and I said one was 75 percent lean and one was 25 percent fat, which one are you going to pick? You're going to pick the lean one, even though they're the same patty. This shows, when you're talking about people's lives, how powerful it is to have something framed a certain way, and how it influences their behavior, their chosen course of action. How do you add meaning to that data?
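The claim that the two framings are "technically the same options" can be checked with a line or two of expected-value arithmetic, using the numbers from the study as described in the talk:

```python
# The two framings of the flu scenario are numerically identical; only
# the wording changes. A quick expected-value check makes that concrete.

def expected_saved(p_all_saved: float, total: int) -> float:
    """Expected number of people saved under the risky option."""
    return p_all_saved * total

total = 600

# Gain frame: A saves 200 for certain; B saves all 600 with p = 1/3.
gain_a = 200
gain_b = expected_saved(1 / 3, total)

# Loss frame: A means 400 die (i.e., 200 saved); B means no one dies
# with p = 1/3 (i.e., all 600 saved with p = 1/3).
loss_a = total - 400
loss_b = expected_saved(1 / 3, total)

assert gain_a == loss_a == 200
assert gain_b == loss_b == 200.0  # same expected value as the sure thing
```

Every option, in both frames, has an expected value of 200 lives saved; the dramatic swing in what doctors chose comes entirely from the framing.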
Another example of framing comes from David McCandless, a fairly famous set of "Guardian" infographics. Who spends the most on their military? Who prioritizes the military the most? The obvious answer to the first might be, well, who spends the most? That's the US, by a long shot. As a matter of fact, all those little boxes are to scale relative to the big box, and they all fit inside it. But what if we ask who spends the most as a percentage of the money they actually have, their GDP? Who's prioritizing, relative to all the money they have? That's going to be Myanmar. Then you have, of all the people they could have in the military, who has the most people in the military? That's going to be North Korea. I'm not making any editorial judgment about any of these. What I'm showing is how one data set can be shown in different ways, to tell different stories, just like editing a documentary film. That's the power of framing, and the power of anchoring to something: anchoring to numbers, anchoring to visuals. Then we have the idea of feedback and feed-forward. We're probably all really familiar with feedback, and hopefully some of you are at least familiar with, or can infer, what feed-forward means. The thing about feedback is, you probably understand what that means, right? You get a little notification in your email client saying, "Yes, your email was sent." OK, that was feedback saying that I completed my task successfully. We take for granted the power to deliver feedback in real time or near real time now. As an example, a couple of years ago, I was watching the show "Mad Men." It's about this advertising executive, Don Draper. He had a wife, and by this season she was an ex-wife. The actress who played her was pregnant, so they added prosthetics and wrote into the role that she was gaining weight.
She was a very Stepford wife, and had been a model, so she had a lot of vanity about her appearance, and this affected her a lot. She joined Weight Watchers. Weight Watchers still operates very similarly, in that you join a group, and on a weekly basis you get weighed, "Biggest Loser" style, to see how much you lost or gained. In the '60s, personal scales were large and expensive, and in general people weren't as health-conscious anyway. This was a feedback loop. A once-a-week feedback loop that she opted into, to learn the effect of all her behaviors and decisions. Instead of saying, "I ate something, and I know it contained 600 calories," once a week you have that feedback loop. Now we can get an instantaneous feedback loop, and we can combine that feedback on our weight with other information, and get other feedback loops as well. This has worked in other ways, right? Thirty years ago, most of you, well, some of you, have never even had a check register. If you wanted insight, a feedback loop into your finances, you manually recorded everything, and you reconciled that with the statement, so you had a once-a-month feedback loop to see where you stood with your finances. Then spreadsheets helped carry some of the load and made you more accurate, tightening that feedback loop, and then off-the-shelf software like Quicken and Money added some framing for you, tightened the feedback loop further, and maybe even connected to the Internet, so you might get a quicker feedback loop. Now, I can swipe my card, and my phone will buzz with a feedback loop. What's interesting is, this is an opportunity, because it buzzes with the fact that I spent 50 bucks. Why doesn't it buzz with what that means for me? It doesn't mean much until you add meaning to that data. Nonetheless, we're closing that feedback loop, and it's almost easy to take for granted that we have that capacity now.
Feed-forward is really interesting. Feed-forward is, the smarter we get, the more we can guide people through what would happen if they took certain behaviors. That Amazon One-Click page, again: what you see is feed-forward. If you order this within eight hours, you will get it on Thursday for free. If I click this button, there's feed-forward telling me this will arrive free of charge by Thursday, or it can feed forward and say it can arrive by Wednesday for four bucks. Even with all these sensors and data, feedback loops are inherently limited. They have a cap, because when you're thinking about people's behavior, once I engage in the behavior and get the feedback, I'm getting information about my behavior after I already did it. As we get smarter, we collect data, and we can be intelligent with this geolocation stuff, we can eventually start to head that off, and show people what a behavior would mean before they engage in it, versus telling them, after the fact, that a behavior they did was good or bad. A little analogy I use: I know and think about all the psychology stuff and irrational choices, and yet I still go into the sandwich shop, and I order the salami sandwich, and I get a soda. I go to the front, and I'm at the register, and I know I went in thinking I don't want the cookies, but then I see the cookies, and I buy the cookies. I know better. I even know I'm not supposed to be doing this. I get the cookies. What happens if I go into the sandwich shop, and my watch or my phone buzzes, and I look at it, and it tells me, "You usually do this, but you set a goal in some system, and we're going to tell you what would happen. This is the feed-forward if you do this behavior. But you could also do this other behavior, and this is the feed-forward if you do that."
Now they've framed it for me, and they've foregrounded it for me, so something that seemed abstract, like, "Oh, I'll worry about the calories later," has become more tangible, and it can influence my behavior. Feed-forward already exists. You look on your credit card statement and it says, "If you pay the minimum payment, this is what will happen. If you pay more than the minimum payment, this is what will happen." Imagine this made dynamic, sitting in your pocket, buzzing you at certain points so that you're actually thinking about it. It exists, but who looks at their statement and actually thinks about this? Let's talk a little bit, quickly, about what we make. There's a case study here I'm going to cut through for time, so we can take questions. I want to talk now about our process. This is a very generic version of a process that the Design Council started to promote about 10 years ago, I think: the double diamond. The idea is that we flare out because we want a lot of inputs. That's the first part of the diamond. Then we make sense of all these inputs, so we come back in, and then we want to make the leap, so we often produce artefacts that help us make sense of it all and push it forward: the personas, and mental models, and journey maps, diagrams, and models, and insights, and principles. Then we move in, and we think about, "What could we do?" We open up again, and we think, what do we actually want to do? Let's prototype that, let's deliver it. That's the general shape. We can sit there and think about behaviors. If we model the behaviors that actually exist, we can start to break down what those behaviors are, and what the actions are. There are all kinds of things, even in a physical service like this, that have feedback, feed-forward, framing, and anchoring opportunities. Then we can think about testing those behaviors in real time, and again, all those things.
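The credit-card statement example is just arithmetic, which is what makes it such a good candidate for dynamic feed-forward. A minimal sketch, with an illustrative balance, rate, and payment amounts (not any real issuer's terms), could simulate the two futures the statement describes:

```python
# A minimal feed-forward sketch of the credit-card statement example:
# simulate how long a balance takes to pay off at two payment levels.
# The balance, APR, and payments below are illustrative assumptions.

def months_to_payoff(balance: float, apr: float, payment: float) -> int:
    """Months until the balance reaches zero, or -1 if it never does."""
    monthly_rate = apr / 12
    months = 0
    while balance > 0:
        balance = balance * (1 + monthly_rate) - payment
        months += 1
        if months > 600:  # guard: the payment doesn't cover the interest
            return -1
    return months

minimum = months_to_payoff(3000, 0.18, 60)   # roughly a minimum payment
extra = months_to_payoff(3000, 0.18, 150)    # a bit more each month

print(f"At $60/mo: {minimum} months; at $150/mo: {extra} months")
```

Showing those two numbers side by side at the moment of payment, rather than in fine print once a month, is the feed-forward move the talk is arguing for.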
I'm going to go through this quickly, because I don't want to take too much time on the prototyping, but this was a project where we really thought it through as a behavioral economics problem, starting in research. In research, we do a lot of activities, and we're often after: what are people feeling, thinking, and doing? Those are our goals when we're doing research. What's happening there? We can start to take things from the psychological side and apply them as a lens through our research. We can actually assess knowledge. If I don't know much about something I should be doing, if I didn't know recycling existed, I might not want to recycle. Once I understand what recycling is, some people are going to want to recycle. What is their motivation, and in what way? We also ask, what are their motivational waves? When I talked about the Red Cross: I'm intrinsically motivated to donate to something, or not donate, and that's hard to move. But if I'm somewhat motivated to do something, there are times when I'll be more motivated, like when I'm seeing those images and I'm prompted while I'm watching TV, and less so when I'm home checking my mail and walking my dog. We think about their motivation, and we think about their ability. What is their actual ability? "Can I donate to the Red Cross right now, or can I not?" You can look at the research and see that they say they want to be doing something. Are we looking at their capacity to actually do it? What are their doubts and barriers? Also called FUDs: the fears, uncertainties, and doubts that might be holding them back. Is that something you can design to address, like the security concerns when Mint first started? We have to address that. We have to make people feel secure about giving us their credentials. Self-efficacy is a thing, too. What is their belief that they can actually do the behavior?
Even if my wife buys me an activity tracker, do I have a sense that I would even be able to increase my steps, or run three times a week, or change my diet? Can you design to address that belief, so that people feel that sense of self-efficacy, that sense of ability to actually engage? In strategy, you might think about this when you do things like journey maps and experience maps, looking at people's behavior over time and space. Can we identify when those motivational waves occur, and when those abilities are there? Can we look at where we can reflect feedback, where we can be smart with sensors? Where are the opportunities to foreground something, to frame it? This is where we're developing empathy and understanding, and crafting our insights. Ecosystem mapping: I talk about how we have little influence over people's environment, but that doesn't mean we have none. We should still understand the environment, so that we can see where we might influence their perception of the environment, their navigation of it, or their interactions with it, even if we can't actually design the environment itself. Understand the world they live in. The thing they interact with might be your app and your service, but what does that mean? If it's an app for financial services, for checking, it probably exists among other financial services apps, which exist within the larger financial services industry, which exists in a world of cultural and regulatory influences. The more we understand this when we're developing that empathy, understanding, and insight, especially the context of the environment, the more we can understand how to leverage either the environment itself, or people's perception, navigation, or interaction with it. Again, we can sit there and think: what has more direct influence, and what has less direct influence?
There's a method called back-casting that I hacked for thinking about behavior change and behavior change outcomes. Back-casting is, obviously, the inverse of forecasting. What is our target state? It could be that we want our target state to be that people who should have life insurance have life insurance. It can apply to general products and services, not just the behavior change value proposition. What is the process? We want people to be happy with the tools they're using, or whatever. What is the outcome? Then let's backtrack, back-cast, the things that have to occur to get to that outcome. I use a somewhat arbitrary acronym, but it breaks down the components of back-casting. This is what it looks like. Really, it's just a workshop-y method where you look at potential outcomes. You can invert it, too, where you have multiple current states and you converge to a single future state. There's no real rigor to it. It's a matter of really defining that future state, and then looking at the anatomy of the path to get there. What behaviors have to occur? What things in the environment have to occur? What influences have to change, things like that. OBI stands for outcomes, behaviors, and interactions, and this is how we get to the point that we're not designing outcomes, and we're not designing behaviors. When we back-cast, what we want to do is identify the outcome, that future state achieved by behaviors. That's the O. When we understand the outcome, we can truly ask, what behaviors have to occur? They sound very similar, but lowering your cholesterol, getting more fit, and losing 10 pounds are all technically different outcomes. They might require different behaviors. We identify the behaviors in order to get to that outcome. Then, the things we can actually design are the interactions.
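The OBI breakdown has a natural shape: one outcome, the behaviors that have to occur to reach it, and, under each behavior, the interactions that are the only layer we actually design. A minimal sketch of that structure, with wholly illustrative content:

```python
# A minimal sketch of the OBI back-casting breakdown described in the
# talk: outcome -> behaviors -> interactions, where only the interactions
# are designed. All example content here is illustrative.

from dataclasses import dataclass, field

@dataclass
class Behavior:
    name: str
    interactions: list = field(default_factory=list)  # what we design

@dataclass
class Outcome:
    target_state: str
    behaviors: list = field(default_factory=list)  # what has to occur

outcome = Outcome(
    target_state="Lose 10 pounds",
    behaviors=[
        Behavior("Run three times a week",
                 ["feed-forward nudge before the usual run time",
                  "weekly progress framed against the goal"]),
        Behavior("Skip the cookie at lunch",
                 ["point-of-decision buzz with calorie feed-forward"]),
    ],
)

for b in outcome.behaviors:
    print(f"{outcome.target_state} <- {b.name}: "
          f"{len(b.interactions)} designed interaction(s)")
```

Writing it down this way also makes the talk's point visible: nothing at the outcome or behavior level is directly designable; every designable thing lives in the leaves.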
We can design the interactions knowing what we can do to help somebody do that: we can leverage anchoring, we can leverage framing, we can leverage feedback and feed-forward, we can collect data, we can leverage other elements of design, but those are the things that are actually designed. The moment we have truly crisp behaviors that we want to help people do, even within traditional products or services, like helping people communicate with their team, or helping people acquire a product they want, we can figure out how to design the actual interactions or touchpoints for those behaviors. Insight combination is another very simple ideation method. If you have those behaviors, then you have all these patterns. Insight combination, developed by Jon Kolko, says that we have these insights about people. People want to do this; people don't like that. Then we have these patterns. Those could be small patterns, like tab navigation and swiping interactions. As an ideation exercise, you force-combine them. I'm going to randomly pick an insight, randomly pick a pattern, and put them together. In this case, around behaviors, there's a ton of behavioral patterns you can use, like social proof, the idea that I might do something I see other people engaging in. Or interlocking, the idea that things will only work in a certain order, which forces your behavior to go a certain way. You can take your insight, the thing you know about people's behavior that you might have found in your research, and mix it with a design pattern. In this case, it might be an insight about the sharing economy, about Uber, and you force-mix it with a pattern. Then you time-box that ideation, for 5 or 10 minutes.
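The mechanics of insight combination as described here are simple enough to sketch: randomly pair a research insight with a behavioral pattern, then time-box the ideation on each forced pair. The insights and patterns below are illustrative stand-ins, not from any real project:

```python
# A tiny sketch of insight combination as described in the talk: force
# a random pairing of a research insight with a design/behavioral
# pattern, then ideate on each pair. Example content is illustrative.

import random

insights = [
    "People abandon budgeting apps after their first overspend",
    "Commuters decide what to eat at the point of purchase",
]
patterns = [
    "social proof",
    "interlocking (forced order of steps)",
    "set completion",
]

def forced_pairs(n, seed=0):
    """Return n random (insight, pattern) pairs for a workshop round."""
    rng = random.Random(seed)  # seeded so a workshop round is repeatable
    return [(rng.choice(insights), rng.choice(patterns)) for _ in range(n)]

for insight, pattern in forced_pairs(3):
    print(f"Ideate 5-10 min: '{insight}' x '{pattern}'")
```

The randomness is the point: deliberately mismatched pairs push the group past the obvious combinations they would have chosen on purpose.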
Thinking about this kind of behavior, there are all these places you can get behavioral patterns, like social proof, and interlocking, and set completion, the idea that if I've got three out of four things, I have this little internal mechanism that makes me want to get the fourth. Dan Lockton has the Design with Intent toolkit, a downloadable PDF of cards that covers a lot of these. Stephen P. Anderson has his Mental Notes cards. Fabrique is a design firm that has a set more oriented toward the graphic design or visual design world: Gestalt theory, color theory, and how they affect behavior. Cialdini is a professor emeritus at Arizona State. He wrote a book called "Influence" in the '90s, very marketing-oriented, but he has these weapons of influence. If you read them, they sound very sinister and marketing-oriented, but social proof is there, and commitment and consistency, the idea that if you commit to something, you might be more likely to follow through with it. If I tell myself, "I want to do the couch-to-5K," all right, I might do it. If I post on Facebook, "I'm going to do the couch-to-5K, maybe you'll support me," I'm more likely to do it, because I've committed to it out loud, and people know it, and I've got this internal mechanism of consistency. Liking, scarcity: we see scarcity like, "Hey, you're on the list, you're 40,000th in line." I'm like, "Ah, I've got to get further up." If you invite a friend, you can jump the list a little further. "Oh, I'm going to invite a friend so I can jump the list, because this thing is scarce, and I should be a part of it." These are very powerful. You collect these things, and you understand them. Again, this is making the leap between what we know about behavior and the actual, tactical ways we can apply it in our products and services. We take those things.
We've done our research, and we've identified these insights about people's behavior, or we've identified from the back-casting what the behaviors are. We say, "Let's take this behavior, let's take social proof, and let's just ideate." You come up with tons of ideas, but the point is to be very generative and expansive about it. Again, what are the things we have at our disposal? Interaction design elements. Dan Saffer, who wrote a great book, "Designing for Interaction," talks a lot about the things at our disposal when we're trying to shape behavior, or mediate behavior, through systems. The things we have here, that we've already talked about, are just a sampling of what we should make sure we have competency in applying. The way I look at our design process, I almost want to make our double diamond time-based. I know at the beginning, we're identifying aspirations, those needs, motivations, and goals. I want to be healthy, and that manifests itself in lowering my cholesterol. At the other end is our ability to actually design something effective, sustainable, and viable toward that. We can think about that process really as a journey. I like what Paola Antonelli says: we often think about how designers can influence behavior, but she poses that we should look at what kind of impact people's behavior should have on design. We should constantly be watching what's happening, behaviorally and culturally, and letting that inform the design. Again, it manifests in our work. It isn't always about behavior change. It might be about the typical enterprise or consumer products and services we have. But I think if we're more intentional about making that leap, and smarter about the psychology of our end users, we're going to be smarter and more intentional about the things we design. Thank you. [applause]