Two-Track User Experience (UX) Research: The Long Game
When I joined the Rally Software (formerly CA Agile Central) team as the new User Experience Researcher, we didn’t have a user research practice, so I set about creating one. I had a strategy, and for a long time, I thought I was executing that strategy with pretty good success. Only recently did I realize that I had failed. In this post, I’ll lay out my original strategy and what made it seem like a good one. Then I’ll describe a better way — specifically, creating two tracks for user research, one that focuses on the immediate needs of our product organization and another that anticipates (and guides) their future needs.
My strategy (and why it was wrong)
At Agile Central, we talk about user experience (UX) design as a 4-stage process. (Caveat: we don’t exactly believe there are four distinct stages, but they give us words to talk about design and the activities we do.) It looks something like this:
I wanted to create a culture around user research at Rally. My original strategy for doing that was to begin at the end stage, Measurement, and move backwards towards Discovery. That may seem counter-intuitive, but there were a few reasons for that approach:
- Validity: As a researcher, it’s much easier to answer questions towards the end of the UX process than the beginning. E.g., “Does this solution work for users?” versus “What do users need?”
- Opportunity: We were building features, but we didn’t really know if they were working. As an organization, we needed to get ahead of that. Once a feature ships, it’s much harder to redirect engineering resources to fix any issues that crop up. Maybe even more important, every failure burns user experience capital with our users.
- Value: We needed to show the value of UX research to our stakeholders. And the sooner, the better. I’ve found that the value of user research is more directly felt in the latter stages of design. You can provide specific recommendations on improving functionality, quickly see the results in the product, and measure the effect they have on behavior. In contrast, the impacts of good Discovery research can be profound. It’s just not as easy to draw a line directly from a research finding to a change in the final product. It’s even harder to measure the impact of those research findings.
To implement this strategy, we built up a practice for refining and measuring our designs which included usability testing, beta testing of new features, in-product feedback mechanisms, and a sequence for evaluating the UX team’s prototypes at ever-finer levels of fidelity. So after a couple of years, our UX research process had become part of our UX design process. To wit:
I felt good about where we were and the journey we were on (and I still feel good about the practices we established during that time). So why, in my introduction to this article, did I describe that journey as a failure? Here’s why:
As a team, we were having trouble finding the amount of time necessary for quality discovery research. We were sprinkling it in whenever possible, but unlike the other aspects of our research, it wasn’t baked into our methodology. Why was that? Among the reasons:
The tyranny of the present. There’s always an immediate need to research something that directly impacts users. As I noted earlier, it’s much easier to see the effects of your research efforts on user behavior in the latter stages of design and development. It’s that squeaky wheel that gets the grease. These short-term needs keep us from investing in the long-term research.
Changing product roadmap. Our Product team works hard to put together a stable product roadmap, but no matter how hard we work, there are always unforeseen events that force changes. (I don’t think this is a unique symptom of our organization.) Because these changes are unanticipated, we, as a UX team, are wholly unprepared for them. That means we have to scramble to put together designs. And to move quickly, we have to rely more on our assumptions than on research findings. This becomes another example of the tyranny of the present, the immediate need trumping the long-term play.
How to get ahead? (i.e., there’s a better way)
Since then, we’ve changed our approach a bit. Unfortunately, there are always going to be unanticipated changes to the product roadmap, and there is always going to be the tyranny of the present. With our new approach at Rally, the way we are getting ahead is by playing the long game. For every Initiative that we work on, our UX researchers still work on the immediate needs of our design and development teams -AND- we spin off some of our effort into a strategic research track. It looks like this:
At Rally, our initiatives are most often built upon a feature set in our existing product, and that feature set is designed to solve a problem for a user. The tactical work involves building the best solution to that problem that we can. The strategic work involves re-evaluating that original problem space. Who is the persona facing the problem? What value are they seeking? What are the user’s needs in the context of the problem space?
Why does this work?
When we spin off a track of strategic work, we’re making a bet. We’re betting that what we learn will inform future work. It’s a leap of faith, but it’s a self-fulfilling one. If we do a good job of defining the characteristics and needs of our users, then we’re in a great position to guide the next generation of work.
Even more, I don’t think I’ve ever worked on any problem space only once. For a variety of reasons, we end up revisiting research that we did in the past. Sometimes, that’s because the feature development was put on hold. Sometimes, we find that the world has changed around us, and we need to revisit what we thought we knew. And sometimes it’s very intentional: revisiting previously defined problem spaces and value propositions is good organizational behavior. An organization that builds something awesome, then sits back and lets the cash flow in, will die. We need to keep improving our solutions or we become irrelevant.
When we do it right, future initiatives are guided by our strategic research and often influenced by what we built as a product of our Tactical track. It looks like the following:
We know that Strategic Discovery Work is critical to our process; yet at the same time, we’re faced with fire drills, changing customer needs and the tyranny of the present. Discovery should still be as much of a part of the Tactical Track as possible. However, because it is so important and all-too-often neglected, we are trying to bake it into our process. Interestingly, one early finding is that the strategic research doesn’t exist in a vacuum. When we set up a Strategic project, we’re clear that the goal is not to directly inform the tactical research; we don’t want to be a slave to the present needs. However, we have found that strategic research always plays an informative role. So our process really looks more like:
Is it working?
So far, our strategic work has enabled us to really dig into the needs of our users, agnostic of our solution. It’s a long play, and thus far we’re seeing it drive the next generation of work. Of course, it’s not all roses and lollipops — we’ve learned several lessons along the way and found several aspects of the process that we need to iron out. For instance, for the researcher, the context switching between tactical and strategic work can be costly and confusing. Further, different tracks mean we have to determine how best to inform stakeholders which track we’re in at any point in time. We’re also experimenting with “swarming” on the Strategic track by leveraging UX researchers outside of our organization. That has introduced challenges of resource management, along with the cost of bringing outside researchers up to speed on the context we’re researching, which can be a project in itself. Of course, with any new process, challenges are not unexpected, and we’re moving forward by iterating on the process every time we roll it out.
As I mentioned before, when we create a strategic research track, we’re placing a bet that we’re going to guide and predict the research needs of our organization. The whole concept of a separate research track is also a bet: is a separate strategic research track going to help us build a better product in the long term? The results are yet to fully come in (it’s a long play, after all), but so far, the bet seems to be paying off.