
Making Sense of Automation with Maturity Models

Ever spent hours talking to users, only to end up feeling you've got nothing but a bigger pile of information? Raw interview data is a treasure trove, but it doesn't give you a roadmap. That's where analysis frameworks come in – they help you turn those messy stories into actionable insights.

In this blog entry, I’m going to introduce a less familiar but highly valuable analytical lens – the Capability Maturity Model (CMM). Along the way, I’ll tell the story of how I got to help a modern startup by unearthing government measures of software development from the late ‘80s.

Popular User Research Techniques

Once you've collected your user stories and done a round of coding to coalesce and consolidate your data, what comes next?

I like to think about what classes of information I’ve gotten.

  • Do your users break themselves into groups, based on shared needs and behaviors? A Persona approach can help.

  • Are users hinting at a deeper layer of goals than the surface tasks? Dig deeper to uncover the “Job To Be Done.”

  • Are you finding a “right way” to use your product – and now evaluating whether users are getting there? That’s getting close to locating a North Star.

Each of these techniques has its strengths – you pick the one that’s most appropriate for the problem at hand.

The right analysis approach depends on the kind of insights we're seeking. In a recent project, CMMs provided unique value in assessing users' process maturity. Let's explore how they work!

Background: How a Startup's Automation Needs Inspired This Analysis

Moment Software is building a tool that makes it easy to embed code in documents. The tool targets infrastructure teams that manage internal processes and runbooks, and write internal software for their companies. One theme I repeatedly heard at Moment was that we were building a tool to help create automation in documents.

At first, I didn’t get it: documents feel like the definition of static material, while applications are dynamic. Documents live in document repositories; code is stored in source directories.  

Over time, I realized the two aren’t as far apart as I’d thought. Documentation shows how to carry out a series of steps; automation packages those steps together. This became clear during the interviews: if a task was infrequent, someone would document how to do it. As the task became more common, people would write scripts to automate it. In fact, I began to hear some interviewees complain about a whole backlog of “to-be-automated” tasks!

How Documentation Grows into Automation

Here’s an example of how documentation evolves into automation, drawn from my time at Honeycomb. 

We had a documentation page on resetting your development environment, including how to reset your local database. It listed the tables to drop and configuration information.

Periodically, someone would need that reset. They would follow the instructions, and sometimes even improve them. Here’s how it evolved.

  • It started as a set of instructions (“from the devdb database, drop the tables users and queries”)

  • Someone added a code segment for each step (“use devdb; drop table users; drop table queries”)

  • Someone else created a script, checked it into the scripts repo, and changed the documentation to just say “run scripts/droptables.sh”

  • Finally, yet another person incorporated it into the main control interface. They moved the script and changed the documentation again (“click the button labeled ‘reset DB’”).

There was no top-level mandate, just many iterations of users making their lives a little easier. Note how the code moved a few times – from documentation (where users would cut and paste it) into the script folder and then to the control interface. 
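
For concreteness, here's a minimal sketch of what a script like that droptables.sh might have looked like. The devdb database and the users and queries tables come straight from the example above; the rest (the mysql invocation, the safety flags) is my hypothetical reconstruction, not Honeycomb's actual script.

    #!/usr/bin/env bash
    # Hypothetical reset script: packages the documented steps into one command.
    set -euo pipefail

    # Drop the development tables that the documentation page listed.
    mysql devdb -e 'DROP TABLE IF EXISTS users; DROP TABLE IF EXISTS queries;'

    echo "devdb reset: dropped tables users and queries."

Once the steps live in one runnable file, the documentation shrinks to a single line (“run scripts/droptables.sh”), and the next jump – wiring those same steps to a button in the control interface – becomes a small change rather than a rewrite.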

Capability Maturity Models

I was looking for a way to analyze how this process of documentation turning into automation evolved – and to pin down why people might get stuck along the way. That's when I remembered an older concept called a "Capability Maturity Model" (CMM). It emerged from late-1980s government efforts to assess software contractors, and it turns out to be a powerful tool for analyzing processes like this one!

Essentially, a CMM describes an organization's skill at carrying out a process in terms of five maturity levels:

  1. Initial: It's been done at least once (probably with a lot of improvisation)

  2. Repeatable: Someone documented it; others can follow the steps

  3. Defined: Clear procedure, maybe with some basic automation bits

  4. Capable: The process is streamlined, heavily reliant on systems

  5. Efficient: Fully mature, may even run automatically.

This framework gives us a clear vocabulary to describe where an organization has focused its efforts. The droptables script above is a classic illustration of moving from stage 1 to stage 4.

[Image: the five levels of a Capability Maturity Model]

We can apply CMMs to lots of processes. A few years ago, Liz Fong-Jones and I talked about how Honeycomb had a “deploy on green” philosophy: a goal that passing tests (and a peer review) should leave a developer confident enough to deploy a code change to production. At Honeycomb, deployment was at maturity level 4: one click would trigger a continuous-integration action and start the entire process flowing. 

In contrast, our rollback process was much less mature – maybe level 2, just documented.   Since our deployments were so reliable, if we needed to change something, we usually fixed forward.  Rollback wasn't a frequent need, so we hadn't invested effort in making it as smooth.

(I’ve learned that some organizations use CMMs as a way to discipline – or even abuse – product teams, turning the model into a performance dashboard. That is a very different use, and I’m not sure it’s a valuable one.)

Mapping Maturity to Code Locations

During interviews, I asked people where they'd look to figure out a process.  Here's the pattern I noticed:

  • Level 1: "The Wild West" – think Slack scrollback or quickly edited wikis – stuff that changes fast and gets lost easily.

  • Level 2: Processes get moved to docs repositories or organized knowledge bases (like Notion) for more permanence.

  • Level 3-4: Documentation AND Code – processes now straddle two worlds: their descriptions get more formal, and there's code involved.

  • Level 5: Fully Automated – processes might even run themselves via control panels.

The Problem: This system is chaotic!  Users I interviewed complained about wasting time searching,  sometimes even finding outdated instructions before realizing there's a better way to get the job done. Have you ever had that experience?

This insight is where the CMM analysis started paying off – it showed us a clear path for Moment to make a real difference.

Putting CMMs to Use

Now that we had the CMM framework, we saw the underlying problem clearly: these maturity levels exist, but users get stuck jumping between them!

Here's where Moment makes a huge difference:  by allowing code to be embedded in documentation, we make that evolution less jarring.  Anyone editing a page can automate bit by bit, without big rewrites or needing to switch between totally separate systems.

The Moment approach gives teams flexibility. In Moment, a whole range of processes can coexist. Simple notes, a partial script that carries out one annoying step, or a fully automated script – they all can work. Processes that are used frequently can naturally evolve towards higher maturity levels.

The CMM didn't just highlight possibilities; it showed us potential challenges, too. Stage 4 users worry about things like version control and editing history – Moment needs to be able to answer those concerns. And as something hits Stage 5, there needs to be a smooth "graduation" path from Moment into broader automation workflows.

Conclusion

The concept of “Capability Maturity Models” gave us a powerful lens for understanding how documentation transforms into automation. It put the challenges users faced with those transitions into clear focus – a huge help when thinking about both Moment's marketing and design goals! We could communicate how users at different maturity levels would all benefit from our product.

Speaking of those challenges... are you finding it hard to pinpoint where your users get stuck on their workflow journey?   That's where bringing in my expertise makes a massive difference.  Drop me a line, and let's see how insights from users can level up your next project.

And stay tuned for my next blog! I'll dive into a completely different framework that helped  Moment chart its strategic course.

How to Ask the Right Question

Have you ever wrapped up a user interview feeling like you didn't really learn what you needed?  The key to great interviews isn't technique alone – it's asking the right questions. I've spent years refining how I approach user interviews. I’d like to discuss common pitfalls – and what you can do to ensure your interviews get you the information crucial to your decisions.

Before an interview, define your core goal. What critical information do you need? Who will ultimately use those findings?  Interviews meant to aid sales differ vastly from those meant to guide product design. Having a clear purpose leads to asking better questions.

When I did a recent project with Moment Technologies to shape product strategy, we had to revise our initial questions substantially.  Our first questions were likely to lead users to give us misleading answers. Let's break down some common pitfalls:

Overly specific questions

Sometimes, we have a feature or a product direction in mind when we’re starting interviews. Asking about those specific features is not likely to work. Asking hypothetical questions – “would you pay for this feature?” – rarely gives meaningful answers. Users aren't great at predicting their future selves, and you and your interviewee almost definitely have different ideas of what a product with that feature might look like.

For a better approach, reframe the conversation around pain points. If you know what your product is meant to help with, you can learn how users interact with that problem. For Moment, we started exploring the concept of  "toil" –  DevOps lingo for those annoying manual tasks that aren’t quite worth automating. We learned a lot about our users' daily challenges, which let us start figuring out how to tune our tool for their work.

Confirmation Bias and Leading Questions

Beware the trap of asking what you want to hear – and then hearing what you expect! Our preconceived ideas can seriously derail interviews. Maybe you're hoping for positive feedback on a pet feature,  so you accidentally phrase questions to steer users that way. It's surprisingly easy to slip into without even realizing it. These sneaky biases mess with your results. 

I’ve been happiest with the results of studies where any answer is a surprise. It’s hard to steer users wrong when you genuinely just want to know what they think.

Using Your Own Vocabulary

Watch out for that jargon! In the interviews I carried out with Moment, after stewing in terms like "toil" and "automatable" for a week, we nearly forgot that those were our own internal lingo. Turns out, some users had totally different definitions, which started to skew our results. We were able to get back on track, but it’s important to look out for this one. Try to understand the world from your users' point of view. What language do they use to describe their day-to-day problems?

Overly personal personas

Persona methods can get you into your users’ heads and give a rich sense of how your product can fit into their work. Some persona presentations go deep on rich, descriptive persona examples: “Jill lives on a homestead farm with three chickens”. The trap is when teams dive too deep into irrelevant questions. Before you ask Jill about her egg-laying situation, focus on how work fits into her life. Is she at home or office-bound? Does she have set hours, or does her work come at irregular intervals?

Let me share a story from my time at Honeycomb. We wanted to understand our users’ work schedules and when they picked up the product. We did get some stories about people’s personal lives — but mainly because they were explaining how they’d used the product on a plane or while picking up groceries. The real insight of our interviews was that we found two distinct core behaviors: people who used Honeycomb actively during the development process and those who turned to it only after something broke. These were very different mindsets – and they sparked some great conversations about what features could support each work mode. 

[Image: a notebook of interview notes – pages well organized, concepts clearly separated]

Wrapping It Up

Interviewing users can be tricky, but the payoff is huge! Everyone loves talking about their work – and you might be surprised at the gold you uncover, leading to products people genuinely love.

Transforming those raw insights into a product roadmap is another skill. Do you need help crafting an impactful, data-driven strategy? That's where I come in! Drop me a line, and let's see how user interviews can supercharge your next project.

Speaking of those Moment interviews – the next step is to organize the raw material I pulled out of those conversations into insights. In the next blog, I'll dive deep into applying analysis methods to turn those interviews into powerful actions!