The SpoolCast with Jared Spool

The SpoolCast has been bringing UX learning to designers’ ears around the world since 2005. Dozens and dozens of hours of Jared Spool interviewing some of the greatest minds in design are available for you to explore.

Episode #158: Lou Rosenfeld - 8 Better Practices for Great Information Architecture, a Virtual Seminar Follow-up

February 3, 2012  ·  20 minutes


The goal of any site is for the right audience to find the right information. But beyond your actual content, there are many things that can cause findability issues. These tend to be unanswered questions about your primary audience and whether or not you’re satisfying the needs of that audience. Good information architecture can help guide your design decisions so that your users can effectively engage with your content.

Show Notes

Lou Rosenfeld offers up suggestions in his virtual seminar, 8 Better Practices for Great Information Architecture: Closing the Findability Gap. Lou believes information architecture offers long-term strategic value, and is more inclusive than some people may think. There wasn’t enough time to address all of the questions during the seminar, so Lou joins Adam Churchill to answer the remaining ones for this podcast.

Tune in to the podcast to hear Lou address these questions:

- What special considerations apply when closing the findability gap on a company-wide intranet?
- How many audiences can a website reasonably handle?
- Is there a distinction between engagement and involvement?
- How do web analytics and user research complement each other?

How do you use information architecture to solve findability issues? Share your thoughts in our comments section.

Full Transcript

Adam Churchill: Welcome, everyone, to another episode of the SpoolCast. Lou Rosenfeld recently joined us for a virtual seminar entitled, 8 Better Practices for Great Information Architecture: Closing the Findability Gap. Now, the seminar yielded lots of good questions and comments, and we decided to have a follow-up conversation that we could make available as a podcast for you.

Now, Lou’s seminar spoke to new opportunities for information architects that add significant value to projects. We’re fortunate that Lou gives us a lot of time, and he’s graciously offered to come back and tackle some of the questions that he thought we could re-address from the seminar.

If you didn’t listen to this particular seminar, you can get access to the recording in UIE’s growing User Experience Training Library. There are presently 80 recorded seminars there, all great topics from speakers like Lou from the world of Experience Design.

I wonder if it’s any coincidence that the two seminars Lou presented for us happened to be numbers 50 and 75 in our arsenal. Nice milestones for us, and with one of our favorite speakers. Hey Lou, welcome back.
Lou Rosenfeld: Thanks, Adam. I guess you’ll get me scheduled for number 100 pretty soon, right?
Adam: That would be awesome.
Lou: Excellent.
Adam: So, Lou, for those that weren’t with us in November for your presentation, can you share an overview with folks?
Lou: Sure thing. This sort of came out of a bit of frustration that I’ve felt the last couple of years that a lot of people see IA in a very limited sense, and don’t see it offering much long-term strategic value. I don’t think anything could be further from the truth.

So what I tried to do was at least map out eight directions that I called “better practices,” because I don’t think there are any such things as best practices in a field where nothing can ever be perfect. You can only make things better. But I laid out eight, and I’ll go through them really quickly right now.

One is just getting better at doing diagnostics, and I spent a lot of time talking about the Zipf Distribution, which is basically a rule that holds on many sites: a little goes a long way. Things like, a few of the search queries that people do on your site account for a huge proportion of all search activity. So a handful of queries needs to work well in order for search overall to work pretty well. Or, a handful of your documents are the ones that most people are going to be accessing, or that are going to be accessed far more often than any of the other documents.

So really not worrying so much about the long tail of the Zipf curve, but the short head. And once you have a sense of what that short head is, you can start working on smaller problems that, when you solve them, go a long way. And I proposed something of a very simple rinse-and-repeat process for constantly diagnosing small things that have big impacts, and correcting for those and doing those on a regular basis.
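
To make that short-head diagnostic concrete, here is a minimal sketch (an illustration, not something from the seminar itself) of the kind of analysis Lou describes: count the site-search queries in a log, sort them, and see how few of them cover most of the search volume. The file name and the "query" column are hypothetical stand-ins for whatever your analytics tool actually exports.

```python
# Minimal short-head diagnostic: which handful of queries accounts for
# most of the search activity on a site? (Illustrative sketch only.)
from collections import Counter
import csv

def short_head(log_path, coverage=0.8):
    """Return the smallest set of top queries covering `coverage` of all searches.

    Assumes a hypothetical CSV with one search per row and a 'query' column.
    """
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["query"].strip().lower()] += 1

    total = sum(counts.values())
    head, running = [], 0
    for query, n in counts.most_common():
        head.append((query, n))
        running += n
        if running / total >= coverage:
            break
    return head

if __name__ == "__main__":
    # e.g. the 80% short head of a (hypothetical) exported query log
    for query, n in short_head("site_search_log.csv"):
        print(f"{n:6d}  {query}")
```

Fixing just the queries that surface in that list, then re-running the analysis on the next log export, is one simple way to picture the rinse-and-repeat loop Lou mentions.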

That was the first one, maybe the most critical one. The second one is simply having better evidence, more balanced evidence. I trotted out one of my favorite diagrams, Christian Rohrer’s diagram of the landscape of user research, which breaks the methods we all know and love into four quadrants along two axes: one axis around attitudinal versus behavioral data, and the other around quantitative versus qualitative data.

What I’m simply suggesting is to be very careful not to do all of your research in one of those quadrants, to have balance across those four quadrants so that you have enough different blind men looking at the elephant and trying to get at true insight.

I talked a little bit about advocating on behalf of the long term and the need to create anchors, things like mission and vision statements and elevator pitches, to counteract what many of us are dealing with in the trenches, which is constantly changing plans and directions, often due to ripple effects from management turnover and teams that are constantly in reactive mode. We need anchors to stabilize our work so that our designs and our teams don’t go off the tracks.

The fourth one was some thinking around measuring engagement, and really looking at how we might do a better job of developing new metrics and ultimately new and better KPIs around things that don’t have to do with clear-cut conversions. How might we start thinking about developing metrics around engagement, around authority, around orientation?

Really around the stuff that I call “the metrics of in-between-ness,” the things that are beyond, again, those just sort of basic conversion measurements that we’ve been doing for years. Because there is more to our sites than the conversions. There are all kinds of other things that need to happen in order for people to have a good experience.
The next two, the fifth and sixth, are areas of information architecture that people don’t think about and aren’t investing in nearly enough, and in which there are fantastic opportunities. Those two areas are better contextual navigation within our deep content, and better search, especially across silos.

A lot of people think of IA as top-down navigation. They talk about IA and search as if they’re separate things, which is wrong; IA includes any kind of finding. So I proposed a bunch of ideas around investing in contextual navigation, specifically things like content or domain modeling, as well as some ideas around improving search, especially taking advantage of opportunities in how we allow people to enter and refine searches through the search UIs, as well as in the design of search results.

And then the last two, seven and eight. Seven was combining design approaches effectively: basically looking at opportunities to have better hybrids of what robots can do really well for us, things like search engines, and what humans can do for us manually, things like best bet selection, and how we might put these types of things together in more effective ways than we typically do right now.
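
As one way to picture that kind of hybrid (a sketch under assumed names, not Lou’s implementation), a human-curated “best bets” table can be checked before falling back to whatever automated search engine is in place. The entries and the search_engine stub below are made up for illustration.

```python
# Illustrative hybrid of human curation and automated search:
# editors maintain a small "best bets" table that is shown ahead of
# whatever the search engine returns. (Hypothetical data and stub.)
BEST_BETS = {
    "parking permit": {"title": "Apply for a Parking Permit", "url": "/permits/parking"},
    "refund": {"title": "Refund Policy", "url": "/help/refunds"},
}

def search_engine(query):
    """Stand-in for the automated search backend actually in use."""
    return [{"title": f"Results for {query!r}", "url": "/search?q=" + query}]

def find(query):
    """Return any matching best bet first, then the engine's results."""
    results = []
    bet = BEST_BETS.get(query.strip().lower())
    if bet:
        results.append({**bet, "best_bet": True})
    results.extend(search_engine(query))
    return results

print(find("Refund"))  # best bet first, then the automated results
```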

Finally, the last one, number eight, was around just remembering that things change, and that your design, your information architecture specifically, must respond to those changes. Looking at things like seasonality as a driver for how we present and organize information, and how that type of thing can be informed by data. Looking, again, at how things are constantly changing in terms of users’ needs, and being able to respond effectively to those changing needs.

So that’s my nutshell version of those eight better practices for findability.
Adam: I love that concept, the metrics of in-between-ness. That was a great part of the seminar for me. Louise wants to know if you have any special considerations or approaches for closing the findability gap with regard to her company-wide intranet.
Lou: That was a really good question, and I hope I do a better job with it this time. We’ll see. Intranets are kind of an interesting animal in that so much of what they’re there for is to help connect people with expertise and to be part of a broader ecosystem. In other words, most intranets fail because there are easier ways to move information around, primarily things like email and sending documents as attachments, and those override the hoped-for benefits of putting a canonical version of any particular document in one place that everyone can find.

Poor information architecture is one of the reasons people bypass intranets and move information through other means that are less effective when it comes to things like version control. So they’re a slightly different thing. They’re part of these bigger systems, and there are also other parts of these environments that go beyond HTML, things like CRM systems and so forth.

So with intranets, you’re still going to find that a lot of the same diagnostics are useful. You’re still going to see a Zipf Distribution for things like what content is being used most frequently. But you’re also going to have to look in other places: are there certain parts of our staff directory or our CRM system that are being used most frequently? Are there certain tasks that span those different technologies, and how can we make those bubble up to the surface?

So the same things really apply, except that information architects who work on intranets are even more challenged in terms of integration. It’s not just integrating content, it’s integrating systems that house different types of content. It’s also integrating people and making sure that the systems don’t get in the way of the actual human connection. Because, again, a lot of times people want to find other people in their organization who have some sort of expertise, and that’s the killer app of the intranet.

So, again, I think a lot of the same things apply, but there are often more silos to deal with, more fragmentation, and that creates a bunch of new challenges for information architects. Also, a lot of great opportunity. So, if you are confounded and pulling your hair out, I would flip it around and say, “Hey, you’ve got a great opportunity facing you.”
Adam: Kristin wanted to know: how many audiences can a website reasonably handle?
Lou: Well, I wish Kristin were on the phone, because I’d make her define reasonable. Reasonably, I’ll hazard a guess of three to five, and even that might be generous. My thinking goes back to Zipf again, this idea that maybe there are one or two audiences that are hugely critical. And then there are secondary audiences, maybe even tertiary ones, that don’t get that same level of treatment, but get some form of treatment. So if you’re an academic website, those audiences might be students, prospective students who are considering applying, and staff and faculty.

So that’s three or maybe four audiences, and you have to really consider what the common needs are for each audience. What does each audience want? What are the tasks they need to accomplish? What are the information needs that have to be satisfied? That’s sort of the high-level treatment where you might invest a lot of manual effort for each of these audiences.

But then, I think secondary audiences for an academic website might be the media, it might be alumni. It might be academics from other institutions. You may not have the resources to scale up so much manual effort for those folks, but you can still give them maybe a lighter treatment. Maybe less customized information for each audience, but maybe a single page that basically gives the lay of the land of the web environment for each of those audiences.

And then maybe there are tertiary audiences, which I can’t really think of off the top of my head, but for those folks you don’t do anything other than give them the straight, robot-handled forms of access. Hey folks, we don’t know who you are. We don’t know if you’re that important, but you can use our search engine and you can use our very basic site hierarchy to navigate the site. Good luck, and let us know if we can help you. So that kind of tiered approach is what I think makes sense.

Now, the math, you know, whether it’s one audience or two that gets that Rolls-Royce treatment, is going to be very much dependent on how many resources you have at your disposal. So it all becomes an issue of scalability. But at least if you tease it into different tiers, you don’t have to treat every audience the same way and feel the pressure to give every audience the Rolls-Royce treatment.
Adam: Luke wants to know if you draw a distinction between engagement and involvement. In other words, the example he offers up is that he may interact with a power company’s site very intensely because he’s upset, he’s got no power, but he’s not necessarily engaged. Can you just say a bit about that?
Lou: Well, yeah, I think that’s a great example. I think what I had said in the presentation was, our goal ultimately is not to design experiences, but to design for engagement. In other words, to give people an opportunity to engage with us in conversation and dialogue, and to feel ownership of that dialogue. In other words, not to have our web environments just be a way to project a one-way monologue at people, but to give them a way to talk with us. So we listen as well as we talk, and as much as we can, we give them a sense of ownership about the conversation, about the service itself, and what we may be providing.

Luke’s example is great, because it’s like, well, my power company makes me angry. So I’m very involved with their website, but not in a positive way. Well, again, I would look at that as an opportunity to create something positive. I know as a retailer at Rosenfeld Media that every time we have a negative customer experience, and there aren’t that many, but when we do, we can usually win people over and at least make them happy, take lemons and make them into lemonade.

But ultimately, what can we do when we have their attention? Can we help them? Can we give them something? Can we give them something that might give them a reason to come back in a way that makes them feel like they’ve dealt with a human, that they’ve been listened to, and that they certainly have a better impression that might pave the way for a future engagement?

The power company, you know, if you look at that example, yes, who would ever want to engage with a power company? Well, most of us probably would. If we have a better sense of being listened to and engaging in a dialogue with the power company, we might be willing to let the power company know that there’s a problem that might be affecting the community, and might be willing to be on the lookout for issues that would be helpful to them.

We might be willing, if they give us the opportunity, to report on their level of customer service: how helpful, how gruff are the people they send out into the field? What would we like them to do in terms of alternative energy? They may be doing lots of surveys, but what about direct feedback? Hey, I would be willing to put solar cells on my roof, and I would be willing to spend this amount of money to do so. Those are the kinds of conversations that those companies aren’t very good at having, but would probably really benefit from.

So even the power company should be, not just can be, designed for engagement. And often a negative interaction might be the doorway to a longer-term and more positive form of engagement.
Adam: Lou, one of the things that came up during the seminar that I thought was fairly valuable, and I wanted you to say a bit more about it, was this: with web analytics alone, we know something’s not working, but we may not know why. With UX research alone, we may know why, but not necessarily whether it’s working or not.
Lou: That’s right. This goes back to the second point I made about doing balanced user research, and having essentially a balanced set of evidence to use to drive your decision making. I mean, ultimately that’s what we do: we’re making decisions, and we need evidence to make those decisions well. That’s what design is ultimately about. And when we have a balanced set of inputs, a balanced set of types of research, we not only get a better picture, but the sum is greater than the parts. We just get better insight overall into whatever problem we’re trying to solve, whatever we’re trying to accomplish.

So, I think there are a lot of interesting dichotomies in the work that’s done inside most organizations that haven’t necessarily put together things like web analytics and user research. In fact, I believe I presented a slide about all those different dichotomies in the presentation that people are welcome to look at. But one of them is, what versus why? So here you have all these people in one part of the organization, maybe in one silo, maybe they’re associated with Omniture, whatever it might be. They’re the web analytics team, whatever you call them. They have all this really rich stuff that describes what is going on, based on behavioral data.

Now, if you’re on the user research team and doing task analysis, which is a totally different thing, might you want to know something about the common information needs that come out of web analytics to influence the type of task analysis work you’re going to pursue? Because that’s expensive work, and it would be really good if you had some foundation that was based on behavioral data to help you shape that agenda for task analysis.

So there’s a nice kind of complementary nature there: hey, you know, we user researchers, we’re really good at figuring out why things are the way they are. We can do this sort of attitudinal research. We can talk to people, we can observe them, we can have them think out loud. But what are the good questions that we should be checking out with users in those studies? Well, can we go back to the data to help shape that agenda? It’s just, again, one example of how these things can come together so that the sum is greater than the parts.

And there’s a whole host of not only examples but, more importantly, opportunities. I think the organizations that figure out how to combine what are currently siloed, organically evolved pockets of research, and put those together in ways that are optimized for generating insights, are ultimately going to make good decisions. And that’s really what I’m trying to get at there.
Adam: Well, this was awesome, Lou. Thanks for circling back with us.
Lou: My pleasure.
Adam: For those listening in, thanks for your support of the UIE Virtual Seminar Program.