Episode #248 Bruce McCarthy - Product Management Meets UX
Show Notes
Product roadmaps are a useful tool for managers and the development teams they oversee. Usability testing and research inform user experience decisions. Both, in the end, benefit the users. So why can't your process contribute to both of these goals?
Bruce McCarthy, through his years of experience, has developed a methodology to get the product and UX teams working in concert. Clickable prototypes and mockups let the product team prioritize its roadmap and let the UX team gather early feedback. This informs the design without committing a lot of time and resources up front. With both teams validating their assumptions, you can arrive at the right path faster.
Bruce received a lot of questions during his seminar, "Lean Roadmapping: Where Product Management and UX Meet." He joins Adam Churchill to address some of them in this podcast.
- How do you handle disagreements on what should be prioritized?
- Should you have separate roadmaps for product development and higher-level management?
- When is it OK to use a lower-fidelity prototype?
- How do you find interview participants for your research?
- What approach do you take to sifting through the data you collect?
- How can you be confident when showing the design to only a small number of people?
- How does this process apply to a more mature product versus an MVP?
Full Transcript
Adam Churchill: Welcome, everyone, to the SpoolCast. A few weeks ago, Bruce McCarthy presented a fantastic virtual seminar for us called "Lean Roadmapping: Where Product Management and UX Meet." The recording of that seminar, along with 175 other UX seminars, is now part of UIE's "All You Can Learn." You can find it there.
In today's podcast, Bruce joins us to discuss some of the great questions that came in from our audience during his presentation. In the seminar, Bruce talks about the collaboration between UX professionals and product managers.
He asks you to imagine a conversation between a UX designer and a product manager that delves into big-picture business goals and fast, low-risk ways to test design ideas that might achieve those goals.
In the presentation, Bruce shares how to make those conversations, and more importantly, their amazing outcomes, happen. Bruce, welcome back. Thanks for spending a bit more time with us.
Bruce McCarthy: Thanks, Adam.
Adam: For those that weren't with us for the day of your presentation, can you give us an overview on what you shared?
Bruce: Sure. We talked about a four-step methodology that I've developed over time for testing out product road map concepts using, among other things, mockups and clickable prototypes and minimum viable products to validate or modify and retest your assumptions.
That way, when product management and UX work together like this, each benefits. Product managers get the data that they need to support their road map priorities, and UX gets the early feedback that allows them to optimize designs before committing a lot of development resources.
Really, they're both participating in this lean, early discovery process, really a validation process, which allows them to correct their course early and often and arrive at a better place with less effort.
The four steps I described that I use in general in various projects are, one, discover market problems with what I call qualitative interviews or open-ended conversations with potential customers.
Two, prioritize those problems with quantitative research like surveys or other ways of aggregating more data than you get in a single or a few conversations.
Three, validate your product vision and direction with mockups. Four, validate your assumptions around what your MVP is going to be, and what your roadmap after that minimum viable product is going to be, with clickable prototypes.
We talked through three case studies that I was personally involved in as the product manager, walking through those four steps, or most of them. Number one was my own start-up.
Today I am the CEO and chief product person for Reqqs, R-E-Q-Q-S. You can find information about it at Reqqs.com. It's a product for product people that helps them prioritize, publish, and road map, currently in a closed beta. We walked through that example.
We walked through the example of a product for a startup I was part of called NetProspex. That product was called SalesProspex, spelled Sales, then P-R-O-S-P-E-X, and we covered how we developed it and got it to market through a lot of this testing process.
Then we also talked about a product that I led a product management team at ATG, Art Technology Group, to bring out, which was ATG's version 10 that focused on multi-site ecommerce capabilities.
One of the things that I've repeatedly discovered is that it's important, when you have an overall framework, that you adapt it to your particular project, your particular situation, your particular team, and market, and so on.
Each of these projects was a little bit different in terms of how we used those four steps. With Reqqs, for example, we did all of those steps and more.
I discovered product people's various problems, some similar to mine and some different, by doing a bunch of qualitative interviews at Starbucks, over the phone, and over Skype.
I did a whole series of about 40 interviews. I prioritized the problems using a UserVoice forum, where I could invite people to vote on which problems were more prevalent and more painful to them. That helped me quantify what I had learned in the qualitative interviews.
I did a whole bunch of Balsamiq mockups to validate the main functions and features of the product, to validate that my product vision fit the problems I was hearing from customers, and made a lot of adjustments based on feedback.
Then we built a clickable mockup to validate, in usability testing, that the product was likely to fulfill the mission before we started building the actual product.
Today we've got an MVP out there with some existing beta customers, as I mentioned, and we're working on a second MVP based on the feedback we got from those guys.
SalesProspex at NetProspex was a tool for a new breed of salespeople sometimes called BDRs, or business development reps, who do nothing but make cold calls all day. It's a tough job, very time-sensitive, very "how many calls can I make today" oriented.
The process followed the same essential path, but a little differently. We had a bunch of BDRs in-house at NetProspex, so I was able to discover their problems not by doing telephone interviews, but by sitting with them, watching them do their jobs, and hearing them complain about things that were difficult for them. I was able to prioritize those problems, at least preliminarily, by having done enough interviews with our own BDRs. But I needed some external validation, because I was really only talking to the members of one team.
I started talking to other customers, and to the managers of those customers' sales teams, to find out if the problems I'd uncovered were as common as I thought. After doing a half dozen of those interviews, it was clear that the same set of problems was coming up over and over again. We didn't end up doing a large quantitative survey; it seemed unnecessary at that point. Instead of doing mockups, we went straight to the clickable prototype, because the vision for what the product needed to be was so clear from those interviews, and from comparable products already in the marketplace, that there was an established niche we could fit into.
We ended up skipping that third step of low-fidelity mockups and going straight to the clickable prototypes. That helped us validate usability really well with our internal team. Today, SalesProspex is the number-two selling product for NetProspex, and it fits its niche very nicely.
For ATG 10, this was enterprise software sold to large companies like Best Buy, Neiman Marcus, and Restoration Hardware. We did not have an agile process at all. We had an 18-month development cycle between releases.
I really wanted to get customer input earlier in the process if I possibly could, so I started the way I did with these other projects, by doing customer interviews and trying to understand where those problems were, what those problems were.
We went on site at a few of these large customers with a whole cross-functional team of design people, engineering, architecture, product management, and BAs, to really understand how they did their work every day.
Then the way we validated that across the larger market was to look at the actual websites of companies using our platform, with the knowledge of how their various websites were related in ways that might not have been obvious to shoppers.
For example, you might not know that Converse is owned by Nike, that those two sneaker brands are the same company, and both of them use ATG underneath as their e-commerce platform.
What we were interested in was how similar or different those sites were, and whether they were managed by the same team internally at Nike. Learning how, ideally, from a business perspective, those sites should relate, in terms of how they're managed and in terms of their functionality, helped us craft our vision for ATG 10, which was all about supporting our customers being able to run multiple sites on a single platform.
While I was there, we did a whole bunch of mockups for them of what the behind-the-scenes workflows would be like, and how the data flowing through those workflows would result in what their online catalog looked like on their storefront.
We did those as low-fidelity mockups, and they ate it up. It was really terrific validation of our product vision before we'd written a line of code. Then, in this case, we skipped the clickable mockups, because our team at the time was using Flash. We were able to build the actual Flash UI, not hooked up to a real back end, and put that in front of them. It was a clickable mockup that was also a first attempt at the actual implementation, and we got a lot of positive and useful feedback from them that way.
The broad summary of that is that, depending on your situation, you might apply those four steps a little differently. If you feel confident, based on your learnings from previous steps, that you already know the answer to, say, step three, skip step three.
It's all about learning the answers to the things you don't know, so move on to step four at that point. To recap the four steps: one, discover market problems with qualitative interviews. Two, prioritize the problems with quantitative research. Three, validate your product vision with mockups. Four, validate your concept for a minimum viable product, and your road map of what you're going to do after that, with clickable prototypes or even early attempts at implementation.
Adam: Very good. We had some great questions from the audience, Bruce. You want to tackle some of those?
Bruce: Sure, happy to do that.
Adam: Let's talk a bit about the decision-making process for prioritizing features. Who's at the table, and what do you do when product managers, developers, and designers aren't on the same page, or are disagreeing about what needs to be in the MVP?
Bruce: Right. That's a great question. I have two slightly opposing answers that, really, betray a tension in any group activity.
The first thing I would say is that product development is not a democracy. Maybe it's my bias as a longtime product guy, though I've also worn other hats in engineering, design, and business development. But at heart, I'm a product guy.
Maybe it's my bias, but I feel that somebody needs to be the ultimate decision maker. Usually it's best if it's the product manager who is calling the shots on, this is the problem in the market that we are trying to solve.
And all of the other decisions need to be driven by whether they help us solve that problem better, faster, cheaper. On the other hand, it's not a dictatorship either, not a pure dictatorship. Maybe the analogy is that it's a constitutional monarchy.
Because a product manager who makes all of those decisions in a vacuum, without seeking the advice, input, and buy-in of all the other stakeholders at the table within the company, is a moron. Not only is he missing out on the opportunity to tap other brains, smart people with valid points of view and valid information the product manager might not have, and so make the plan better. But also, any plan, no matter how great it is, that everybody is not bought into is doomed to fail, because the product manager can't do all that stuff, can't implement the product and launch the product and sell the product, all by themselves.
They've got to have the developers and the designers and everybody involved, the QA people, the sales people, the marketing people, the finance people, all pointing in the same direction and going after the same goal.
The best way I have found to do that is to involve everybody early. Make it clear that it's not a democracy, but that everybody's opinion matters, everybody has a valid point of view, everybody will be listened to.
That it's the product manager's job to synthesize all of that into a plan that hopefully, if they do a good job, they can get consensus on.
Adam: Understood. You talked about and showed some great examples of these road maps that you speak of. Do you recommend creating separate road maps for the product development team and then maybe a different version for higher level management that sticks to the big picture?
Bruce: Yeah, that's a great question. In a perfect world, you would have a slightly different view of the road map for every stakeholder. As a practical matter, you don't want to have 75 different road maps; that's a lot to manage.
But there are a few classic road map audiences that you want to differentiate between. Engineering wants to know what we're working on in the immediate term, this sprint or this quarter or whatever it is, and how it fits into the big picture. They want a fair amount of granularity and specificity. They want to know exactly what features, and exactly when we're supposed to be delivering them.
That's a fair amount of detail. You don't want to be giving that amount of detail to other stakeholders outside the development team like executives, sales people, marketers or customers, for that matter.
There are a few reasons you don't want to give all of them that much detail. One is, they can't absorb it. They can't see the forest for the trees, and they're going to lose the big picture if you inundate them with lots of detail.
Secondly, and almost more importantly, you don't want to overcommit. At the point where you're asking the engineering folks to meet some aggressive goals, you know that there's some amount of uncertainty about whether they're going to be able to deliver every last thing within the desired time period.
Because there's no being 100 percent certain of the future. You don't ever want to be in the situation of saying, "Yeah, I know we told you we were going to have that great feature at the end of the month, but we're just not. It's going to take an extra six weeks."
You don't want to tell that to the executive team, to the board, to the marketing team that's been planning on launching it at a particular trade show the week after.
To the sales team that promised it to a customer, to the finance team that said, "Well, you told us it was OK to sign a contract with a particular customer that was dependent on that feature." All of those things will get you into trouble.
Your road map for those stakeholders outside the development team needs to be deliberately vague about exactly what you're delivering and exactly when. That vagueness is OK, though, because what do those stakeholders really want to know?
They really want to know that you are committed to a course of action that they will benefit from, that you are building things in their best interest.
The CEO wants to know that you're adding value to the company. Sales wants to know that you're increasing your ability to close sales by adding value to the product. A particular customer wants to know that you are trying to serve their needs, and that you're trying to solve their problems.
What you can do is put together a road map for those external stakeholders that is theme-oriented, that is all about the benefits that will accrue to them over time. The way you convey that is by telling them which problems you're going to tackle.
At a high level, your road map might be that we're going to solve problem X in the first half of next year, and we're going to solve problems Y and Z in the second half of next year.
Nice, high-level time periods, and very vague as to exactly which features you're going to deliver to solve those problems. If you've got a release coming up next quarter and you're pretty sure you know what the features are for it, give the high-level theme and then say, "and here are the three features that will deliver on that theme." But for the quarter after that, where you have less clarity about exactly what will ship, don't tie yourself down.
Give them the theme and when they ask you about the specific features, say, "Well, we're still finalizing them." Does that make sense?
Adam: It does. One of the overriding themes of the presentation is getting product managers to consider this idea of a prototype to test their design ideas.
Say a bit about the fidelity there, Bruce. When is it OK to simply use a quick and dirty static mockup versus something that's clickable and has some interaction? How do you make those decisions?
Bruce: It depends on your goals. Earlier in the process where you are still trying to validate that your concept of what the product will do will solve the problems that you've identified in your early interviews, something static is fine.
Because it gives a directional indication of where you're going. You're not going to show every potential interaction in the product. You're going to show some core key screens. Like in Reqqs, we have two things that the product is designed to do. One is to help you prioritize.
And the other is to be able to communicate a road map of what's coming out when. I was able to do two very simple, static mockups in Balsamiq of those two screens.
One screen was a list view of everything you need to prioritize, all the different features you might want to ship, and how you score them against your goals, so that you can figure out which ones are more important. The other was a road map, a timeline view of which of those things is coming out when. In the real product, there's a whole bunch of interactions that let you get from one screen to the other.
Early on, I was trying to validate that these two core displays addressed the problems of prioritizing and communicating a road map that I had discovered in my interviews.
When I got people to say, "Yeah, that makes sense to me," after iterating on these quick, easy to change mockups a few times, that's when it made sense for me to take it a step further and do a clickable prototype that allowed me to simulate the full workflow to get from one screen to another.
And, for that matter, to simulate the setup leading to even the first screen, because certain information about your goals needed to be gathered before you could even populate that first screen with feature ideas. It's really about your goal.
Generally, you're going to go with the static stuff earlier and the clickable stuff later. Partly because the static stuff is quicker and easier to create and to scrap and start again if you got it wrong.
And, partly because earlier on, you're trying to validate high level concepts, and later on, you're trying to validate, "Now that we've settled on those concepts, what is the precise workflow? Can I get from point A to point B? And, does that workflow make sense?"
We discovered, for example, in the course of doing Reqqs, there was a lot of back and forth between that list view, what we call the Scorecard, where you do your prioritization, and a view where you put in your goals against which you prioritize things.
It wasn't a linear flow of you put in your goals and then you put in all your feature ideas to rank against those goals. It was very much a back and forth adjusting the goals as more ideas occurred to you, so we made it very easy to slide back and forth between those two views.
We didn't discover that with the static mockups. We discovered that with the clickable prototypes. We weren't even ready for that conversation until we validated the core screens with the mockup.
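To make the Scorecard idea concrete, here is a minimal sketch of the kind of goal-weighted feature scoring a view like that implies. The goal names, weights, features, and scores are hypothetical illustrations, not Reqqs's actual scoring model.

```python
# A minimal sketch of goal-weighted feature scoring, in the spirit of the
# Scorecard view Bruce describes. All names and numbers are hypothetical.

# Each goal carries a weight reflecting its importance to the business.
goals = {"grow_signups": 3, "reduce_churn": 2, "expand_upmarket": 1}

# Each feature idea is scored 0-5 against every goal.
features = {
    "onboarding_wizard": {"grow_signups": 5, "reduce_churn": 3, "expand_upmarket": 0},
    "sso_support":       {"grow_signups": 1, "reduce_churn": 2, "expand_upmarket": 5},
    "usage_dashboard":   {"grow_signups": 2, "reduce_churn": 4, "expand_upmarket": 2},
}

def weighted_score(scores: dict) -> int:
    """Sum each goal score times that goal's weight."""
    return sum(goals[g] * s for g, s in scores.items())

# Rank features by total weighted score, highest first.
ranked = sorted(features, key=lambda f: weighted_score(features[f]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(features[name])}")
```

Adjusting a goal's weight re-ranks everything, which is one reason the back-and-forth between the goals view and the Scorecard that Bruce mentions matters so much.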
Adam: Bruce, in that first step of your four step process, discovering the problems that you need to solve. How are you getting people to agree to be interviewed?
Bruce: That's a good question. It's really easy if you have an existing, going business, especially in the B2B space, and you have customers.
You can go to your existing customers and talk to them about product improvements that you might want to introduce or new sister products that you might want to produce by asking them about their business, their job, their goals, and their pains.
Not by asking them about the products directly, but by getting inside their heads. You have a captive audience there. Generally speaking, my rule of thumb is that if someone's job revolves around using your product, or depends strongly on using it, it's going to be pretty easy to get them on the phone. They're invested, almost emotionally, in your product continuing to help them and getting better at helping them. They're eager, and usually flattered, to be asked for product input.
If, on the other hand, you have a product that they only use once in a while, or it's not a core part of their job or their life, if it's B2C, or if it's a brand new product that they've never heard of or they know nothing about, you've got a hump to get over to get their interest.
I know folks who use monetary incentives, particularly for consumer products. A friend of mine, for example, loves to put ads on Craigslist and offer people 25 bucks to get on the phone, and gets hundreds of people responding.
You've got to give people a questionnaire to validate that they're qualified to be a potential consumer of your product, but that works really well. In business-to-business, I've done it very much by networking, by going through my LinkedIn contacts.
For Reqqs, I go to product management events. Because, again, it's part of their job; even though they've never seen this product before, it promises to help them with their job, so they're very interested in talking. Those are at least a few of the main tips to follow.
Adam: Very good. You're left with this pile of observations and data, right? What do you do with it? How are you synthesizing all of that?
Bruce: The toughest part of that is not the quantitative survey data. That's usually pretty easy. You can get graphs out of SurveyMonkey and stuff like that.
The toughest part of synthesizing data, and where the question probably came from, is step one, the qualitative interviews. Those conversations are fuzzy. You, maybe, had them over coffee or over a beer, and, maybe, you didn't take great notes.
I would give you a couple of tips. One is: make sure you take great notes. No matter what, find a way to take notes on all these conversations, no matter how informal they are. Make those notes really detailed, as close to the interviewee's verbatim comments as you can get.
After you've done 10 of them, you will forget who said what. You want to be able to pull out some directional, quantifiable information, some synthesis of your notes, by reading your notes from all 10 conversations at once, after you're done with the interviews.
Figure out how you're going to take good notes. I type very fast, so I will bring my laptop to a coffee or do a Skype call and be typing while I'm talking to the person. Other people aren't comfortable with that.
Another thing you can do is bring a second person to the interview who will be the recording secretary, if you will. They will type as much of the conversation, get it down on paper, as they possibly can without participating in the interview.
You might reasonably ask whether people are weirded out by you, or your compatriot, writing down everything they say, but I don't find that people are put off by that at all. I tell them routinely, especially if I'm taking the notes, because sometimes I have to pause a second to catch up. I have yet to have anybody be bothered by it. I just say, "I'm taking some notes as we go."
For the most part, people are flattered that you're listening and that you care enough to write everything down. So, let's make sure that you got the notes. How do you synthesize that information?
If you've got three or four pages of notes on every one of these interviews, and you've done 10 or 12 interviews, that's a lot of material to digest.
While it's not a statistically valid sample for projecting election results, or exactly which feature is more important than which other feature, I do find that I can go through each transcript and extract themes.
I will notice that in the transcript of interview one, this person mentioned this problem of setting product priorities. Great, I'll make a note of that on a separate piece of paper or in a Google Doc.
Then, I go to the second interview, and somebody says, not in the exact same words, but they have a similar problem. I'll say, "OK, second hash mark for that."
Then, it turns out that six out of ten people mentioned this problem of prioritization. I make a list of the problems that came up in the different interviews, and I tally up how many times those problems came up. That starts to give me themes that emerge.
If I have problems that only came up once, maybe twice, they might not be real. But if I have problems that came up in a significant fraction of the interviews, three, four, five, six times out of ten, then I think, "OK, there's a theme there. There's some kind of theme around setting priorities that's hard for people." That's the first stage of synthesis, and it allows me to go to the quantification stage, where I'll ask about the relative importance of the ones that rose above the noise, the ones that had three to ten mentions. I can do that with problems. I can do that with the role that the person is in, so that I can start to segment my audience. I can do that with the size of company or vertical.
I can start to pull out any sorts of themes that are either problem-oriented or descriptive of the user and their situation, and I can go and quantify that in a survey afterward.
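As a rough illustration of that tallying step, here is a minimal sketch, assuming you have already reduced each interview's notes to a list of theme labels by hand. The interviews, theme names, and threshold are hypothetical.

```python
from collections import Counter

# Each entry is one interview, already reduced by hand to theme labels.
# Themes and interview contents here are hypothetical.
interviews = [
    ["setting_priorities", "stakeholder_buy_in"],
    ["setting_priorities", "communicating_roadmap"],
    ["communicating_roadmap"],
    ["setting_priorities", "tool_fatigue"],
    ["setting_priorities", "communicating_roadmap", "stakeholder_buy_in"],
    ["setting_priorities"],
    # ... remaining interviews
]

# Count how many interviews mention each theme: one "hash mark" per
# interview, so dedupe within a single conversation first.
tallies = Counter(theme for notes in interviews for theme in set(notes))

# Keep themes that came up in a significant fraction of interviews;
# per Bruce's rule of thumb, one or two mentions may just be noise.
threshold = 3
themes = {t: n for t, n in tallies.items() if n >= threshold}
print(themes)  # e.g. {'setting_priorities': 5, 'communicating_roadmap': 3}
```

The surviving themes are exactly the candidates to carry into the quantitative survey step Bruce describes next.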
Adam: Say a bit more about the testing. Specifically, we had someone wanting to know how you can be confident when you're only showing your design to a small number of people?
Bruce: Yeah, good question. I read this fascinating article by Jakob Nielsen a few years ago, where he said that in user testing, you don't need nearly as many test participants as you think you do.
It seems to make sense from a statistical point of view that you would need hundreds. The minimum sample size for a reasonably accurate poll, say, in politics, is 380 or something like that. But nobody has time to do 380 in-person, half-hour interviews or user tests. What Jakob found, and I found that this resonated with my personal experience, was that you begin to see repeating patterns somewhere around six to eight interviews. Somewhere around that number, you begin to get diminishing returns in terms of learning anything new.
I don't know how that holds up statistically. It's probably not projectable over a large audience. But for directional purposes, knowing "this theme is resonating with people, or it's not; this is a real problem, or it's not," you get that really quickly.
The latest thinking, a friend of mine who's in the UX trade shared with me, is that they get good results with as few as five interviews in a given market segment or a given type of user. After that, they see diminishing returns.
That's the first half of my answer: you don't need as many interviews as you think you do. The second thing I would say is that where you have a broad audience, and you want statistical validation to find out the size of a market or the frequency of a given problem in it, you can do a survey. There are a couple of philosophies around doing surveys. You can't come right out and ask anybody in a survey, "Would you buy X or not?" You won't get a realistic answer to that direct a question.
What you can get a realistic answer on is "Is X more important to you than Y?" "Are you more likely to buy if we have feature X than feature Y?" That comparative thing. People will give you a reasonable answer. They have no reason not to. You tend to get good, accurate data about that.
If you've done your qualitative interviews to uncover the short list of problems, you can do that quantitative follow-up to figure out which problems rise to the top in terms of priorities. We did that with Reqqs. I uncovered the 10 most common product-manager-type problems.
Then I put up a UserVoice forum and asked people to vote on which problems were more painful for them. The two that came out on top became the core of the product.
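If the article Bruce is recalling is Jakob Nielsen's work with Tom Landauer on test-user counts, the model behind it says the share of problems found after n sessions is 1 - (1 - L)^n, where L is the chance that any one user surfaces a given problem, estimated there at roughly 31 percent. A quick computation under that assumption shows why five to eight sessions hit diminishing returns:

```python
# Diminishing returns from adding test participants, using the
# cumulative-detection model usually credited to Nielsen and Landauer:
# share of problems found after n sessions = 1 - (1 - L)**n,
# with L ~= 0.31 (their published estimate; an assumption here,
# not a figure from the podcast).
L = 0.31
for n in range(1, 11):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} participants: {found:.0%} of problems found")
```

Under that model, five participants surface roughly 85 percent of problems and eight around 95 percent, which lines up with the six-to-eight-interview pattern Bruce describes.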
Adam: Dan had a question about applying your process to a more mature product. How do you put this in play for something that's not an MVP, but possibly the next release?
Bruce: That's similar to what we did at ATG. As I mentioned, that was the tenth major release of ATG's enterprise software product. We had a mature product with a long development cycle. All of the principles we've talked about, interviewing and statistical validation, mockups and clickable prototypes, and testing your way from concept to release, apply to new features and new versions of existing products just as well as they do to completely-from-scratch startup projects.
Consider whatever version of your product you have today as, in effect, the prototype, and try to do a gap analysis, based on interviews you do with customers, between what they ideally need to solve their problems and what your product does today.
Whatever is in that gap is probably what you should be developing. That seems like a reasonable hypothesis, at any rate, and you can validate it with a static mockup. You can say, "All right, I think we know what fits in that gap between what the product does today and what customers are telling us their problems are." Then do a static mockup, put it in front of customers, and ask, "Does that solve your problem?"
Keep iterating on it and trying different versions until you hear back from them, "Yeah, that solves my problem. Can I have it tomorrow?"
Then go from there to a quickly built prototype of how it would integrate with your existing product, how it fits into the workflow of your existing product, so people can try it there.
One thing the big outfits everybody is familiar with, like Google and Facebook and Amazon, do is go through that same kind of cycle. But instead of showing people mockups, they put something into the product itself, because they have so much traffic coming in. They can put something experimental into the product and expose 0.2 percent of the traffic to that new thing.
Even if it's not fully baked yet, they can put in what's called a fake-button test, where they add a button inviting you to click to get or try out some new feature. Then, when you click on it, it either shows you nothing, or you get a 404 error, or it shows you some early prototype version of the feature.
What they're trying to learn at that stage is whether anybody will click on the button at all, whether there's a value proposition worth taking further. It's possible to do a lot in an existing product to test and learn your way to what you should be adding to it.
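As a sketch of how that kind of small-slice exposure can work, here is one common pattern: deterministic, hash-based bucketing that shows an experimental button to a fixed fraction of users. The function, names, and hashing scheme are hypothetical, not any particular company's system.

```python
import hashlib

def in_experiment(user_id: str, experiment: str, fraction: float = 0.002) -> bool:
    """Deterministically expose `fraction` of users to an experiment.

    Hashing user_id together with the experiment name gives each user a
    stable bucket per experiment, so the same user always sees (or never
    sees) the fake button.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform-ish value in [0, 1]
    return bucket < fraction

# Only ~0.2% of users see the experimental "fake button"; a click is
# logged as evidence of interest before the feature is actually built.
if in_experiment(user_id="user-12345", experiment="bulk-export-button"):
    print("render fake button")  # clicking would log interest or show a teaser
```

The deterministic hash matters: if exposure were re-randomized per page load, the same user would flicker in and out of the experiment and the click data would be much noisier.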
Adam: Bruce, let's leave our audience with some resources. Where can they get some more details on road mapping?
Bruce: Sure. Thank you for asking. If you go to my blog, productpowers.com, I've got a whole series on what I call the dirty dozen road map roadblocks. It's a series of articles on problems people have with road mapping, challenges they face, and what you can do about them.
It links to resources in other places on the web as well, where you can get more information. You might also want to take a look at my SlideShare, slideshare.net/BMmccarthy, B-M-M-C-C-A-R-T-H-Y.
I've got a number of presentations from appearances that I've done at product camp and other places about road mapping in particular.
Including, since somebody asked earlier about different road map formats for internal versus external audiences, one presentation called "Road Map Formats" that has four different formats for different situations people might be interested in.
Adam: You've mentioned your product Reqqs a number of times, and you wanted to offer up access to the beta. How do people get that?
Bruce: You can go to reqqs.com, that's R-E-Q-Q-S.com. There's a sign-up form for the mailing list. People on the mailing list will be the first ones invited when we open the beta a little further.
Also, you can always email me at bruce@reqqs.com if you're desperate to get into the current private beta, which allows people to do prioritization.
Adam: Bruce, this was awesome. Thanks very much for spending a bit more time with us.
Bruce: My pleasure, Adam. It's always great to help out product people and the Reqqs people.
Adam: To our audience, thanks for listening in and for your support of the UIE Virtual Seminar Program. Goodbye for now.