Understanding Visualization Genres
Forgive me: this is a bit of a rant. To make sure it’s a good one, we’re doing it in five parts.
Where and how will this visualization be used?
This question drives every choice made when designing a visualization – from the design of the chart itself all the way down to how the underlying data needs to be arranged. The creator of the visualization must make different choices depending on the use case and context.
I think of this in the same way I think of “genre” in writing. You wouldn’t mistake a memoir for a news article or a technical manual – they make different choices about what to include, use different writing styles, even different words. Even though all three might be written with clear, grammatical language, their goals are different. A news article strives for emotionless facts, while a memoir might be more informal. Different genres occur in different contexts.
From a reader’s point of view, understanding the genre of a piece of writing is critical to knowing how to interpret it.
The same is true of visualizations. Different visualization genres suit different purposes. The choice of visualization genre – the purpose, audience, and context of the visual – helps us think about appropriate visualization techniques.
I’d identify three major genres of visualization, each of which comes with its own questions:
Exploration: I want to discover new insights about my data.
Presentation: I know the answer and I want to share it with others.
Monitoring: I know the questions I want to ask, and check them from time to time.
From left to right: a Python notebook showing exploration; Hans Rosling giving a data presentation; a dashboard showing data monitoring
Understanding the genre of a visualization is incredibly important. A designer will make very different choices depending on whether they expect the question they are asked to have a factual answer, or whether they are instead presenting a visualization to be featured on the front cover of a newsletter.
Yet somehow this question gets skipped over far too often. At Microsoft, colleagues would ask me to help them learn about a dataset. They would be disappointed that the result was unlikely to be a novel data representation with a cool 3D animation. Instead, I’d present them with a data exploration in Excel or Python, neatly identifying insights and clarifying results.
We’ll come back to my colleagues’ disappointment in the last entry. In the next few, I’ll go a bit deeper into each of these genres, exploring their unique characteristics, challenges, and best practices. By recognizing and embracing the concept of genre, we can figure out what to build, what to expect, and who will use our charts.
Accelerate Product Adoption with the Diffusion of Innovations
In my last entry, I talked about how I used “Capability Maturity Models” to better make sense of interview results. In this entry, I’d like to explore how the Diffusion of Innovations helped us think about a direction for product strategy.
Moment.dev asked me to help define a clear product direction: the basics of the tool were in place, but how would we turn that into users? We felt that if we had a clear story about who would use Moment, we’d be better able to focus the tool. In other words, a clear adoption strategy would shape product design.
There were many directions we could go. One strategy might seem good because it looked exciting to the first customer at a company; a different strategy might seem good because it could lead to very powerful usage patterns in the medium term. To decide how to move forward, we needed to juggle many factors: how did we picture it being adopted at a customer’s site? How quickly did we need users to see results?
We needed a way to keep these tradeoffs straight.
The Diffusion of Innovations
I’ve been a fan of the Diffusion of Innovations – both the book and the theory – since Barry Wellman introduced it to me in graduate school. The USDA had originally funded much of the Diffusion of Innovations line of research in the 1950s, trying to figure out how to help farmers modernize their methods. The book breaks down the attributes of innovations to help understand which ones are likely to be adopted, and the attributes of the people adopting them. You might recognize the book from the language about “early adopters” and “late majority,” which became popular from that research.
The other part of the book identifies five different attributes of an innovation:
Perceived advantage: can the user tell that the innovation works better than the status quo?
Complexity: how hard is the innovation to use?
Trialability: can I try the innovation in a limited way – and turn back if it doesn’t work out?
Compatibility: how much does the innovation require me to change about the way I work?
Observability: can other people see that I’m using the innovation?
“Observability” is Rogers’s term, and is entirely distinct from the sense of “observability” used in the distributed-systems monitoring community.
Using the Diffusion of Innovations at Moment.dev
How do we apply this to Moment? Moment.dev combines text and apps: end-users can edit pages, adding interactions that run as code. We saw in the last blog entry that Moment supports gradual automation, where subsequent users can improve documentation into automation.
Moment for Interactive Runbooks
What if we provided a tool to help make instructions for common internal tasks, such as runbooks for incident response or instructions for managing internal tasks? As ops teams dealt with incidents, they might choose to improve the pages they used often. Teams might even review which pages were executed most often, to learn what needed improvement and to locate their internal pain points.
We loved the idea that some of the runbooks would gradually improve into live notebooks. For example, if a step was “confirm that the deploy had completed”, that might be improved to a live indicator in the page that would show whether the deploy had in fact completed. If a second step required the user to find a process and restart it, that could be replaced with a button.
Perceived advantage would be pretty good once the runbook was established – our interviews showed that users struggled with runbooks becoming obsolete. An automated runbook would attract use, and hopefully be kept up to date.
Complexity would be a challenge: users would need to learn the Moment way of creating live examples.
Trialability would also be a challenge: the default state of Moment was just documents, so users wouldn’t see the advantages of Moment quickly.
Compatibility would be fairly high: we could build an automatic upgrade that would help users get from their current runbooks to our system.
Observability was very good: users would see the runbook and be able to see any live examples that had been created.
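One way I keep an assessment like this straight is to jot it down as rough scores and let the low scores point at the adoption risks. A minimal sketch – the numeric scores and attribute keys are my own illustration of the assessment above, not part of Rogers’s framework:

```python
# Rough 1-5 scores for the "interactive runbooks" direction,
# transcribed from the assessment above (higher = better for adoption).
runbooks = {
    "perceived_advantage": 4,  # good once the runbook is established
    "complexity": 2,           # users must learn the Moment way
    "trialability": 2,         # plain documents don't show the payoff quickly
    "compatibility": 4,        # an automatic upgrade path could exist
    "observability": 5,        # live examples are visible to everyone
}

def weakest(scores, n=2):
    """Return the n lowest-scoring attributes -- the adoption risks."""
    return sorted(scores, key=scores.get)[:n]

print(weakest(runbooks))  # the attributes most worth designing around
```

Running this against the runbook scores surfaces complexity and trialability as the weak points – exactly the two attributes the next section tries to design around.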
Enhancing Runbooks - Kubernetes Upgrades
Knowing these strengths – and weaknesses – what could we do to ease adoption? Are there any design enhancements we could make to runbooks to make them more trialable and less complex?
We thought about providing an out-of-the-box solution that would be easy to use from the first click. We wanted to find a task that we could prepare a runbook for – one that people could use with minimal configuration. Optimally, we’d find a task that:
Is annoying to carry out
Has some steps that can be automated
Will help scaffold users into exploring Moment and learning how to create their own automations
The idea is that these three criteria would address the weaknesses. A packaged solution would improve the perceived advantage; trialability would improve by helping customers get started rapidly; and complexity would be greatly reduced when users only had to do a moderate amount of configuration.
We began to consider tasks that might fit that space. Upgrading Kubernetes clusters seemed like a candidate: a few interview reports had discussed what it took to properly test a Kubernetes upgrade. The process required tracking progress through many steps, collaborating with other teams, and asking each team to run through a set of testing and acceptance steps.
This began to sound like a good task for Moment. If we could build a tool that made it reasonable to run a Kubernetes upgrade in Moment, then perhaps this could help with trialability and complexity. We would have to carefully design the system to support this set of tasks.
A Design Path Forward
To be clear, we don’t want to build a version of Moment that can only do Kubernetes upgrades. Rather, we want this starting scenario to work well out of the box, so that users can start there and grow into their other needs. We also need to consider which features would support not just this scenario but others as well. For example, what features can we add to Moment to best support coordinating large projects like this?
This more specialized tool offers some great opportunities – but it comes with a trap. Users might decide that Moment is only a tool for Kubernetes upgrades. In gaining easier adoption, we might discourage users from being as creative as we want to allow them to be. It will be important to design the next steps to ensure users can grow and develop in the tool.
Harnessing the Power of Frameworks
The Diffusion of Innovations framework proved invaluable in organizing the complex questions facing our product strategy at Moment. It allowed us to compare strategies, understand trade-offs across multiple dimensions, and rapidly identify testable product directions. This approach can streamline decision-making and accelerate innovation within any organization.
If you're looking to enhance your product development process and make strategic decisions with greater clarity, I'd be delighted to explore how these frameworks could benefit your team. As a consultant, I bring expertise in applying these methodologies to real-world products and can help you unlock your product's full potential.
Let's connect and discuss how I can help your team achieve its goals.
Making Sense of Automation with Maturity Models
Ever spent hours talking to users, only to end up feeling you've got nothing but a bigger pile of information? Raw interview data is a treasure trove, but it doesn't hand you a roadmap. That's where analysis frameworks come in – they help you turn those messy stories into actionable insights.
In this blog entry, I’m going to introduce a less familiar but highly valuable analytical lens – the Capability Maturity Model (CMM). I’ll tell the story of how I got to help a modern startup by unearthing government measures of software process from the late ‘80s.
Popular User Research Techniques
Once you've collected your user stories and done a round of coding to coalesce and consolidate your data, what comes next?
I like to think about what classes of information I’ve gotten.
Do your users break themselves into groups, based on shared needs and behaviors? A Persona approach can help.
Are users hinting at a deeper layer of goals than the surface tasks? Dig deeper to uncover the “Job To Be Done.”
Are you finding a “right way” to use your product – and now you’re evaluating whether they’re getting there? That’s getting close to locating a North Star.
Each of these techniques has its strengths – you pick the one that’s most appropriate for the problem at hand.
The right analysis approach depends on the kind of insights we're seeking. In a recent project, a CMM provided unique value in assessing users' process maturity. Let's explore how it works!
Background: How a Startup's Automation Needs Inspired This Analysis
Moment Software is building a tool that makes it easy to embed code in documents. The tool targets infrastructure teams that manage internal processes and runbooks, and write internal software for their companies. One theme I repeatedly heard at Moment was that we were building a tool to help create automation in documents.
At first, I didn’t get it: documents feel like the definition of static material, while applications are dynamic. Documents live in document repositories; code is stored in source directories.
Over time, I realized the two of those aren’t as far apart as I might think. Documentation shows how to carry out a series of steps. Automation packages those steps together. This became clear during the interviews: if a task were infrequent, someone would document how to do it. As the task became more common, people would write scripts to automate it. In fact, I began to hear some interviewees complain about a whole backlog of “to-be-automated” tasks!
How Documentation Grows into Automation
Here’s an example of how documentation evolves into automation, drawn from my time at Honeycomb.
We had a documentation page on resetting your development environment, including how to reset your local database. It listed the tables to drop and configuration information.
Periodically, someone would need that reset. They would follow the instructions, and sometimes even improve them. Here’s how it evolved.
It started as a set of instructions (“from the devdb database, drop the tables users and queries”)
Someone added a code segment for each step (“use devdb; drop table users; drop table queries”)
Someone else created a script, which they checked into the scripts repo, and changed the documentation to just say “run scripts/droptables.sh”
Finally, yet another person incorporated it into the main control interface. They moved the script and changed the documentation again (“click the button labeled ‘reset DB’”).
There was no top-level mandate, just many iterations of users making their lives a little easier. Note how the code moved a few times – from documentation (where users would cut and paste it) into the script folder and then to the control interface.
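The end state of that evolution was a script. A hypothetical reconstruction of what that final automation might have looked like – the real droptables.sh was a shell script run against the devdb database; this sketch uses Python and an in-memory SQLite database so it is self-contained:

```python
import sqlite3

def reset_dev_tables(conn, tables=("users", "queries")):
    """Automate the documented reset steps: drop each listed table."""
    cur = conn.cursor()
    for table in tables:
        # Mirrors the documented steps: "drop table users; drop table queries"
        cur.execute(f"DROP TABLE IF EXISTS {table}")
    conn.commit()

# Demo against an in-memory database standing in for devdb
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.execute("CREATE TABLE queries (id INTEGER)")
reset_dev_tables(conn)
```

The point isn’t the code itself but the packaging: once the steps live in a function, the documentation shrinks to “run the script,” and the final stage – a ‘reset DB’ button – is just a UI wrapper around the same call.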
Capability Maturity Models
I was looking for a way to analyze how documentation evolves into automation – and to pin down why people might get stuck along the way. That's when I remembered an older concept, the "Capability Maturity Model" (CMM). It emerged from software engineering research in the 1980s, but it turned out to be a powerful tool for analyzing processes like this!
Essentially, a CMM describes an organization's skill level in carrying out a process in five maturity levels:
Initial: It's been done at least once (probably with a lot of improvisation)
Repeatable: Someone documented it; others can follow the steps
Defined: Clear procedure, maybe with some basic automation bits
Capable: The process is streamlined, heavily reliant on systems
Efficient: Fully mature, may even run automatically
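Because the levels are ordered, they can be written down as a simple comparable scale. A minimal sketch of how I think of it – the level names follow the list above; the example assessments come from the Honeycomb story later in this entry:

```python
from enum import IntEnum

class Maturity(IntEnum):
    """The five CMM levels, ordered so they can be compared."""
    INITIAL = 1     # done at least once, with improvisation
    REPEATABLE = 2  # documented; others can follow the steps
    DEFINED = 3     # clear procedure, some basic automation
    CAPABLE = 4     # streamlined, heavily reliant on systems
    EFFICIENT = 5   # fully mature, may run automatically

# Example assessments (drawn from the deploy/rollback discussion below):
processes = {
    "deploy": Maturity.CAPABLE,      # one click triggers CI and ships it
    "rollback": Maturity.REPEATABLE, # documented, but little automation
}
assert processes["deploy"] > processes["rollback"]
```

Writing the levels down this way makes the comparisons concrete: it is the gap between two processes, not the absolute number, that tells you where an organization has invested.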
This framework gives us a clear vocabulary to describe where an organization has focused its efforts. The droptables example above is a classic illustration of moving from level 2 to level 4.
We can apply CMMs to lots of processes. A few years ago, Liz Fong-Jones and I talked about how Honeycomb had a “deploy on green” philosophy: a goal that passing tests (and a peer review) should leave a developer confident enough to deploy a code change to production. At Honeycomb, deployment was at maturity level 4: one click would trigger a continuous-integration action and start the entire process flowing.
In contrast, our rollback process was much less mature – maybe level 2, just documented. Since our deployments were so reliable, if we needed to change something, we usually fixed forward. Rollback wasn't a frequent need, so we hadn't invested effort in making it as smooth.
(I’ve learned that some organizations use CMMs as a way to abuse — or discipline — product teams by trying to turn them into a dashboard. That is a very different use, and I’m not sure I agree that’s a valuable use of the model.)
Mapping Maturity to Code Locations
During interviews, I asked people where they'd look to figure out a process. Here's the pattern I noticed:
Level 1: The Wild West – think Slack scrollback or quickly edited wikis; stuff that changes fast and gets lost easily.
Level 2: Documented – processes get moved to docs repositories or organized knowledge bases (like Notion) for more permanence.
Levels 3-4: Documentation and code – processes now straddle two worlds; their descriptions get more formal, and there's code involved.
Level 5: Fully automated – processes might even run themselves via control panels.
The Problem: This system is chaotic! Users I interviewed complained about wasting time searching, sometimes even finding outdated instructions before realizing there's a better way to get the job done. Have you ever had that experience?
This insight is where the CMM analysis started paying off – it showed us a clear path for Moment to make a real difference.
Putting CMMs to Use
Now that we had the CMM framework, we saw the underlying problem clearly: these maturity levels exist, but users get stuck jumping between them!
Here's where Moment makes a huge difference: by allowing code to be embedded in documentation, we make that evolution less jarring. Anyone editing a page can automate bit by bit, without big rewrites or needing to switch between totally separate systems.
The Moment approach gives teams flexibility. In Moment, a whole range of processes can coexist: simple notes, a partial script that carries out one annoying step, or a fully automated workflow. Things that are used frequently will naturally evolve toward higher maturity levels.
The CMM didn't just highlight possibilities; it showed us potential challenges, too. Stage 4 users worry about things like version control and editing history – Moment needs to be able to answer those concerns. And as something hits Stage 5, there needs to be a smooth "graduation" path from Moment into broader automation workflows.
Conclusion
The concept of “Capability Maturity Models” gave us a powerful lens for understanding how documentation transforms into automation. It put the challenges users faced with those transitions into clear focus – a huge help when thinking about both Moment's marketing and design goals! We could communicate how users at different maturity levels would all benefit from our product.
Speaking of those challenges... are you finding it hard to pinpoint where your users get stuck on their workflow journey? That's where bringing in my expertise makes a massive difference. Drop me a line, and let's see how insights from users can level up your next project.
And stay tuned for my next blog! I'll dive into a completely different framework that helped Moment chart its strategic course.
How to Ask the Right Question
Have you ever wrapped up a user interview feeling like you didn't really learn what you needed? The key to great interviews isn't technique alone – it's asking the right questions. I've spent years refining how I approach user interviews. I’d like to discuss common pitfalls – and what you can do to ensure your interviews get you the information crucial to your decisions.
Before an interview, define your core goal. What critical information do you need? Who will ultimately use those findings? Interviews meant to aid sales differ vastly from those meant to guide product design. Having a clear purpose leads to asking better questions.
When I did a recent project with Moment Technologies to shape product strategy, we had to revise our initial questions substantially. Our first questions were likely to lead users to give us misleading answers. Let's break down some common pitfalls:
Overly specific questions
Sometimes, we have a feature or a product direction in mind when we’re starting interviews. Asking about those specific features is not likely to work. Hypothetical questions – “would you pay for this feature?” – rarely give meaningful answers. Users aren't great at predicting their future selves, and you and your interviewee almost certainly have different ideas of what a product with that feature might look like.
For a better approach, reframe the conversation around pain points. If you know what your product is meant to help with, you can learn how users interact with that problem. For Moment, we started exploring the concept of "toil" – DevOps lingo for those annoying manual tasks that aren’t quite worth automating. We learned a lot about our users' daily challenges, which let us start figuring out how to tune our tool for their work.
Confirmation Bias and Leading Questions
Beware the trap of asking what you want to hear – and then hearing what you expect! Our preconceived ideas can seriously derail interviews. Maybe you're hoping for positive feedback on a pet feature, so you accidentally phrase questions to steer users that way. It's surprisingly easy to slip into without even realizing it. These sneaky biases mess with your results.
I’ve been happiest with the results of studies where any answer is a surprise. It’s hard to steer users wrong when you genuinely just want to know what they think.
Using Your Own Vocabulary
Watch out for that jargon! In the interviews I carried out with Moment, after stewing in terms like "toil" and "automatable" for a week, we nearly forgot those are our internal lingo. Turns out, some users had totally different definitions, which started to skew our results. We were able to get back on track, but it’s important to look out for this one. Try to understand the world from your users' point of view. What language do they use to describe their day-to-day problems?
Overly personal personas
Persona methods can get us into our users’ heads and give a rich sense of how your product can fit into their work. Some persona presentations lean on rich, descriptive examples: “Jill lives on a homestead farm with three chickens.” The trap is when teams dive too deep into irrelevant details. Before you ask Jill about her egg-laying situation, focus on how work fits into her life. Is she at home or office-bound? Does she have set hours, or does her work come at irregular intervals?
Let me share a story from my time at Honeycomb. We wanted to understand our users’ work schedules and when they picked up the product. We did get some stories about people’s personal lives — but mainly because they were explaining how they’d used the product on a plane or while picking up groceries. The real insight of our interviews was that we found two distinct core behaviors: people who used Honeycomb actively during the development process and those who turned to it only after something broke. These were very different mindsets – and they sparked some great conversations about what features could support each work mode.
Wrapping It Up
Interviewing users can be tricky, but the payoff is huge! Everyone loves talking about their work – and you might be surprised at the gold you uncover, leading to products people genuinely love.
Transforming those raw insights into a product roadmap is another skill. Do you need help crafting an impactful, data-driven strategy? That's where I come in! Drop me a line, and let's see how user interviews can supercharge your next project.
Speaking of those Moment interviews – the next step is to organize the raw material I pulled out of those conversations into insights. In the next blog, I'll dive deep into applying analysis methods to turn those interviews into powerful actions!
Growing into Production
In my entry on “Measure, Design, Build,” I talked about the prototyping process: how we get from data, users, and an interesting problem into a workable prototype. What’s the next step?
Optimally, this process becomes a step in organic growth: bringing in the additional skills that we need to make the new feature real, one step at a time.
Growing the SLO project
When Liz Fong-Jones and I created the Honeycomb SLO feature, for example, we kicked it off with three days of intense meetings in a Seattle co-working space. (By the way: if you ever have a chance to get to work with Liz? Absolutely worth it.) We looked at existing user feedback, finding limitations in the trigger product. We sketched possible UIs. We mocked up the core algorithms and equations — first in a spreadsheet, then a Python notebook. Those first steps got us far enough to learn about a lot of unstated assumptions in the algorithms. We were able to use the sheets to figure out how to model prediction, and to figure out what parameters the algorithm needed.
Then we started to build it into a product. We had three steps in mind:
Be able to run dogfood SLOs on Honeycomb data
Be able to hand-hold a few intrepid customers through SLOs
Any customer can create an SLO
The first thing we needed was a back-end engineer, who could start building out the query caching machinery. (This turned out to be rather hard! I wrote about some of our lessons – and how we spent $10,000 in a day.) I had enough coding prowess to start building out the front end, but another engineer hopped in to start wiring us into the production database. Liz put on her infra hat and started figuring out how we’d need to arrange databases and servers to support our expected user loads.
We now had four people on the SLO team as we crossed the first threshold: an internally available tool. We encouraged the front- and back-end engineering teams at Honeycomb to incorporate SLOs into their practice, and they started adding SLO alerting to their on-call monitoring and handoff cycle. Liz stepped away to drive the growth of DevRel at Honeycomb.
The project continued to grow: a product manager to help manage internal usage and prioritize the growing todo lists. Design resources. Another front-end engineer to polish the graphs and get BubbleUp more fully incorporated. Step by step, we moved toward release, and the team grew to fulfill our needs. We brought in a Solutions Architect to help us coordinate Phase 2, hand-holding our first customers.
By now, the design was stable. I was less and less useful as the engineers discussed query policies, API changes, and challenges incorporating various libraries. I transitioned into writing documentation and working with customers, and started up my next project.
Growing into product
It feels like I’ve seen two different patterns for staffing projects. When you understand the scope of the project, it makes sense to assign a team at the start — say, “a designer, a PM, two devs” — and build out the feature with the team.
But other projects are harder to shape and scope. This pattern seems like a useful one for projects that still have some risks: we bring in resources on parts that we have either successfully de-risked, or where we need the expertise to build out the next step. When building SLOs, we couldn’t have used a full team at the start: there were too many dependencies to work out, too many pieces to put in place.
I love watching a project grow – and I love when I get a real expert on a topic outside my domain who can pick up some early decision and sand off the sharp edges, turning it into smooth, well-operating code. And, always with some sadness, I love that moment when I realize the fledgling product can now fly without me, and it's time to start on the next phase.