The Process Selection Matrix: A 2x2 for Enlightened Product Teams

Stephen P. Anderson
9 min read · Aug 22, 2024


Ugh. Every week. Another debate about process…

I’d like to propose a different perspective on our discussions of product design & development processes: What if we stop debating THE process (and deviations from it), and instead think about different product team processes for different types of projects?

[This thinking is my extended version of “It depends.”]

Choosing the right process — and there are 4 categories of processes I’ll identify — depends upon (1) risk, and (2) value certainty.

Context:

I’ve worked now in a couple of environments where the ideal design process really took hold. Double Diamond. HCD (Human-Centered Design). Whatever you call it. Things are built with upfront and iterative customer research. Teams all use personas. Customer journeys. Early user testing and validation. The whole shebang. Great, right? Except when it isn’t.

(And if you’re working in a place where these human-centered processes haven’t taken hold yet — keep up the good fight! This article might not be for you…)

A few years ago, I wrote about generative features [WHICH HAS NOTHING AT ALL TO DO WITH GENERATIVE AI!], and tried to lay out a case for at least ‘under-specifying’ features, if not intentionally slipping in some generative features. This kind of goes against the whole ‘specified use case’ approach that is the norm. You should definitely go read that post. But that post raised a question: What kinds of product/design processes would allow for, or encourage, this kind of heretical thinking? Or more to the point: Why would you build and ship something that’s not backed by customer research?

Here’s the challenge: We have lots of processes. And beliefs about how to design, build, and ship products. Many of these are variations on the same essential thing. A few of these are grounded in something fundamentally different. And there’s a bit of truth and goodness to most of these approaches. MVP. Lean. HCD. UX. Double Diamond. Scrum. Agile. Build / Release / Learn. Experiments. And yes, even Waterfall.

Unfortunately, our conversations always gravitate toward converging on THE process, shared by all. One process to rule them all! “This process improves upon that process.” To be clear, I have no issues with improving upon what’s come before, or debating the pros & cons of different approaches. What I do take issue with is failing to recognize when unique situations call for unique processes. When we go all in on ONE process, we create blind spots. And we close ourselves off to ways of working that might be more situationally appropriate for that feature / idea / fix / project / whatever.

So how might we choose the best process for the situation? Let’s zoom out a bit. Introducing…

🥁

The Process Selection Matrix

Here I want to share a meta-framework I created a few years ago to help identify when to use which type of process. I won’t go into any details about the specific processes suggested — that varies with team needs. But, I will argue that there are, broadly speaking, four different approaches for four distinctive kinds of work.

Here’s the model, upfront:

A 2x2 matrix. The vertical axis goes from Literal Request to Imaginative Idea. The horizontal axis goes from Low Risk to Risky. This results in 4 quadrants, with the labels of (1) Uncertain Value + Tiny Bets (Low Risk), (2) Uncertain Value + Big Bets (High Risk), (3) Certain Value + Tiny Bets (Low Risk), (4) Certain Value + Big Bets (High Risk).
I shall dub thee, “Anderson’s Process Selection Matrix”

This is a 2x2, with two axes.

Horizontal Axis: Risk
The first axis is all about risk, from low-risk to high-risk projects.

Risk, in this case, is about a number of things: UX and tech debt, level of effort to complete, commitment (if you remove this later on, will people be upset?), risk tolerance of your industry (e.g. financial services vs social media), one-way doors vs two-way doors, etc.

Vertical Axis: Ideas
The other axis is concerned with the source of the idea, be that an explicit customer request or an “innovative” idea. And there are many degrees in between: something customers are hollering for, ideas inspired by customer insights, that ‘crazy’ idea the founder/CEO just came to you with, and so on.

So… put these together (there’s a quick sketch after this list) and we can quickly frame things as:
• A high-risk request
• A low-risk request
• A high-risk idea
• A low-risk idea
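If it helps to see the matrix as a simple decision rule, here’s a minimal sketch in Python. The quadrant labels and process suggestions mirror the 2x2 above; the function name and exact wording are mine, purely for illustration:

```python
# A minimal sketch of the Process Selection Matrix as a decision rule.
# Inputs map to the two axes: risk (horizontal) and idea source (vertical).
# Names and phrasing here are illustrative, not from the original article.

def suggest_process(high_risk: bool, customer_request: bool) -> str:
    if customer_request and high_risk:
        # Certain value + big bet
        return "Run the full HCD process: upfront research, iterate, validate, ship."
    if customer_request and not high_risk:
        # Certain value + tiny bet
        return "Skip the heavy process: quick release to production, then verify satisfaction."
    if not customer_request and high_risk:
        # Uncertain value + big bet
        return "De-risk with an experiential prototype before committing to build."
    # Uncertain value + tiny bet
    return "Ship the tiny, generative bet and closely monitor what customers do with it."

# Example: a low-risk, non-customer-backed idea (a generative feature)
print(suggest_process(high_risk=False, customer_request=False))
```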

Building on this, we can start to discuss different ways to respond:

Let’s start with the HCD Way…

Animated gif from the TV show The Mandalorian that says “This is the way.”

High-Risk Requests

For high-risk requests, ‘normal’ HCD processes apply. Cue up images of a customer research and clarification process, where you flare and focus on both the problem and possible solutions. Up-front research also ensures you’re solving for actual (versus imagined) job stories. Through iterative design and feedback you clarify not only what to solve, but how to best solve that problem. There’s confidence, and sign-off, and things get built, shipped, then monitored (hopefully!) for validation and further learnings. This is the default design (and development) process discussed in many flavors all over the web.

Low-Risk Requests

The problem with the typical HCD process comes when it’s applied to very narrow, low-risk requests. ‘Hey, can you add support for bullets’ or ‘I need a faster way to X.’ In these cases, I’ve seen well-intending teams run these tiny requests through the weightier HCD process (now an unnecessary gauntlet!) described above. Instead, if there’s significant upfront confidence and clarity, it’s OK to make a quick release into production, then verify for satisfaction. You can (gasp!) ‘skip’ the normal process, to be a more responsive company. And improve throughout. I’ll say it again: It’s OK to skip big portions of the HCD process for low-risk requests.

The trick, of course, is being certain that something is indeed a small request — and not something more. That ‘tiny’ request can open up a Pandora’s box of considerations, at which point it needs to be re-scoped and run through a more traditional HCD process. You often don’t know until you dig into the work. But, trust your gut. If the research and validation process outsizes the scope of the work, and it’s low-risk work, maybe that process doesn’t apply.

High-Risk Ideas

Okay, let’s go back to the high-risk side of the model, but let’s shift up to high-risk ideas. These are the crazy, ‘You gotta be kidding me… Huh, I’m not sure I can picture it… Wow, that’s mind-blowing…’ types of ideas. It’s hard for all but a few folks to imagine what’s being proposed. Accordingly, ideas (high or low risk) favor a “How might we…” or “What if…” framing over something like a job story or user stories. What’s the problem we’re solving for, when there could be dozens, or none? That’s the point of a risky idea.

For high-risk ideas, the best thing we can do is create (or play with an existing, comparable) experiential prototype. The really far-out ideas have to be experienced to be understood. And experienced to really spot what’s great and not so great about that idea. When we live in abstract sketches or high-fidelity screens and mockups, we miss out on the very nuanced experiential details that a living, dynamic prototype can offer.

Example: At Mural, we explored some pretty wild VR ideas. We needed a fast way to de-risk and validate which ideas were worth building in native code (a very lengthy and laborious process). Also, sketches and arm-waving clearly weren’t sufficiently experiential. Fortunately for us, we could quickly prototype the experiences in Horizon Worlds, a kind of sandbox building environment with light coding possibilities. And by quickly, I mean 16 completely unique games/concepts built out in a matter of weeks! From this, we gathered a ton of insights, we were able to test assumptions, and we only moved forward with those ideas that had strong validation.

Note, with high-risk ideas, we’re not doing a bunch of upfront customer research nor seeking to heavily inform what gets prototyped. Instead, we’re leading with an experiential prototype — to spark conversation and feedback. The experience is the research. Given the great imaginative leaps required to envision unseen opportunities, we create something to react to. This is often limited to internal folks, or trusted customers who have an appetite for ambiguity and all things future facing, aspirational, and requiring imagination. We de-risk not through up-front research, but through experiential prototypes. [FWIW, this is most often the domain of labs or R&D groups.]

Which leaves us with… 🥁

Low-Risk Ideas

(This, BTW, is the context for my musings on generative features — now would be a good time to read that article, for what follows to really make sense…)

The domain of low-risk ideas is where we’re not really solving for an explicit customer problem — we may never know the value or utility of the thing we want to ship. But, we can imagine lots of ways it could be used. Maybe.

This is where certainty will never be had — we need to embrace a quick release to production, then closely monitor for what happens next. And iterate, in response. This is the space for tiny, innocuous bets, bets that would likely die in a normal research process (“What’s the job story? What problem does this solve?”) but might explode in a myriad of spectacular use cases, yet to be imagined. This is the space for generative and underspecified features.

If it’s low risk, and fairly easy to just push into production, and you want to discover what real customers — at scale — might do with this (and how it might help them), just do it.

Origin Story

I didn’t arrive at this overnight. I think precursors to this model have been simmering for a long while, going back at least 10 years, to when I posted this donut graphic asking ‘What’s less than MVP?’

A critique of MVP. 4 images, arranged left to right with a caption underneath each. First image: ingredients used to make a donut, with the caption “WTF? Insufficient Features. Incomplete.” Second image: A burnt donut, with the caption “Yuck! Poor quality compromises testing.” Third image: A donut (no glaze?) with the caption “Minimum Viable Product.” Fourth image: A donut covered in sprinkles with the caption “Product.”

What’s ironic is that that tweet was a reaction against the ‘everything is an experiment’ mindset, a mindset which has its own blind spots. Sometimes, there’s not enough there to justify the experiment (my donut visual). More often, I see experiment-focused teams testing things that needn’t be tested — things better suited for the low-risk/request quadrant. Just build it. Or worse, I see the opposite: Teams are so invested in a ‘very-tiny-iterative-experiments’ mindset that they’ve shut themselves off to the big bets, or riskier customer-backed solutions; these teams find themselves suffering a local maximum problem. While I love the ethos of experimentation, I get tired of the platitudes; many teams seem to strike out when it comes to practicing — pragmatically — an experimentation mindset.

🕐

Later on, I worked at a company where HCD had really taken hold. The good? Lots of engineering teams were very customer-backed in their work. The problem? It took at least 9 months for the tiniest of features to ever get placed in front of an actual customer. Big, upfront research (and not the iterative kind) was the norm — until it wasn’t. You can probably guess what eventually happened: There was a predictable backlash against things taking too long, and the pendulum began swinging to the other extreme of “just ship it” (which, BTW, is the context for my post on Polarity Maps).

🕑

Fast forward to 2021… I found myself in another unique situation, trying to create space for experiments, but experiments that fit the upper-left quadrant of the model (i.e., NOT customer-backed). I found myself advocating for experiments that might unlock possibilities rather than refine desired business results. Less play it safe, more… experimentation?! That was a hard pitch. At least it was, until I framed it in this matrix.

So, that’s it. That’s the model.

I’m sharing this, as it’s a way of thinking that has proven to be a helpful framing whenever talks of this process or that process come my way. Should you go with Double Diamond? Reverse Double Diamond? Build / measure / learn cycles? Ship straight to code, with no user testing? My response: Why not a bit of each, depending upon (1) risk, and (2) value certainty?

Prior Art?

While I feel the Process Selection Matrix (sounds like a decent, if boring, name for this) is wholly original, the ideas are similar to, or informed by, the following sources:
• Eddie Obeng’s The Game of Projects
• Natalie Nixon, PhD’s WonderRigor matrix
• This post on variance spectrum
• Dave Snowden’s Cynefin framework
• David J. Bland’s Assumptions Mapping
