I’ve started a bit of a discussion within Adaptive Path on the subject of design research, and my concern that designers are flocking to research as a way to not have their work so coldly scrutinized. I just wrote up an email to an internal list trying to capture my thoughts, and have realized that it’s too muddled even for the AP blog, so I’m posting it here, because, well, I look to my readership to help me figure out when I’m making sense and when I’ve lost it.
I think it goes without saying that I support user research. I love the insights that we’re able to develop, and I do believe it often leads to superior designs. I have no desire to cast it aside. BUT, what I am reacting to is a trend I’ve been witnessing.
I talk to a lot of designers, both in-house and in the community, and a common thread of late is how much they love research. With a very definite implication that they’re less interested in design, and instead want to focus on how research frames the problem of what to design. I even get a sense that many feel they have grown beyond design, that they’re kind of pooh-poohing the craft of design, that they want to focus on more strategic concerns now.
My concern is that this desire to shift toward research is often motivated by research’s lack of accountability. Design is easy to criticize, easy to test, easy to measure. It is relatively straightforward to determine a design’s success (even if that success is determined subjectively by the key stakeholder). Research, on the other hand, is hard to criticize, hard to test, hard to measure. The results of research, typically some understanding of the audience and a plan that takes advantage of those insights, aren’t held up to such scrutiny.
I fear this emphasis and idolatry of research, for two reasons. First, it loses sight of the fact that research is simply a means to an end — that end being to deliver results. Second, if research doesn’t demonstrate explicit value, it becomes a target for line-item removal, particularly when things get rough.
Perhaps a tangent, but I think a revealing one, comes from a discussion I had with Jared Spool, when I interviewed him for our website a bit back.
PM: What are typical mistakes, or misguided notions that you see when observing others engaged in user research?
JS: I think probably the biggest thing is not understanding the difference between observation, inference, opinion, and recommendation. Those four things are quite distinct and independent of each other. And if you don’t realize there’s a difference, you tend to muddle them up, and then things get very confusing.
In the full transcript, Jared defines them essentially as:
- observation: what you saw (the user clicked a link)
- inference: why you think something happened, usually because of causality (they clicked the link because they thought it would take them where they wanted to go)
- opinion: a statement about the situation based on inferences (the link has a confusing label)
- recommendation: how to change the situation to achieve a goal (the link should be renamed as ______)
You pretty much have to twist Jared’s arm, or give him lots and lots of money, to make recommendations. Because he doesn’t want to give recommendations unless he’s pretty certain they will lead to the desired change. Which means he needs to collect *tons* of data to give him that confidence. This might seem awfully reductive, but I think it is key, because it makes his research findings *accountable*.
I’m going to leave it at this for now. I’ve probably already blathered too much.