Use Case

Engineering Performance Metrics Dashboard

How do you measure software engineering performance? How do CTOs or CIOs respond when someone else in the business asks how engineering is doing? Too often, the software organization is a black box, even for technical leaders.

The absence of clear metrics for engineering performance makes it difficult for everyone—engineers, managers, executives—to understand what works and what doesn’t. The lack of good metrics isn’t for want of trying. The problem is that most measurement initiatives collapse under their own weight. In pursuing the perfect KPI, we end up measuring the wrong thing, or too many things, or settling on metrics that suggest no logical action or are too hard to derive. The result is more meetings, more spreadsheets, more custom queries… and less clarity.

There’s a better way.

Death to metrics

Pinpoint’s engineering performance metrics dashboard changes the story. We use machine learning to derive clear, actionable insights from the raw activity data in software delivery systems such as Jira and GitHub. We have three criteria for any metric we surface:

It must be actionable. To be useful, the metric should suggest some kind of action. A metric fails if, on seeing it, the reaction is: “Okay, well... now what?”

It must be relevant. If a metric is too far removed from the goals of the company, there’s no reason to look at it. “Hours worked” is an (in)famous example of an irrelevant metric. It’s looking at effort, not outcome.

It must be derivable from machine intelligence. No matter how good a metric might be, if it requires a lot of manual effort to capture or calculate, it’s dead on arrival. Sifting thousands of data points to detect a performance trend? That’s machine work.

Metrics that meet these criteria are no longer “metrics” in the old sense—we call them signals. They cut through the noise of typical dashboards to show you what’s really going on.

An engineering performance metrics dashboard the business understands

Think of engineering as a pipeline: ideas enter at one end and exit the other as quality software. What we want to know is: how much do we get through the pipeline, how fast do we do it, how good is what comes out, and at what cost?

At Pinpoint, we rely on signals to answer these questions. First, we build a view of all work and its current state. This view is filterable by team, timeframe, and work type (epic, story, task, bug, etc.) (1). In this example, the view shows work from the past quarter. We see the total number of issues in each primary work state—backlog, in progress, or closed—as well as the percentage change from the prior period (2).

[Screenshot: work-state overview of all issues, filterable by team, timeframe, and work type]
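To make this concrete, here is a minimal sketch of how such a view might be tallied from raw tracker data. The Issue record and its field names are hypothetical simplifications for illustration, not Pinpoint’s schema or the Jira API:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Issue:
    # Hypothetical, simplified record; real trackers expose far richer schemas.
    type: str                 # "epic", "story", "task", "bug", ...
    assignee: Optional[str]
    created: date
    started: Optional[date]   # None while still in the backlog
    closed: Optional[date]    # None while still open

def primary_state(issue: Issue) -> str:
    """Map an issue to one of the three primary work states."""
    if issue.closed:
        return "closed"
    return "in progress" if issue.started else "backlog"

def state_counts(issues: list[Issue]) -> Counter:
    """Total issues currently in each primary work state."""
    return Counter(primary_state(i) for i in issues)

def pct_change(current: int, prior: int) -> float:
    """Percentage change from the prior period, e.g. 120 vs. 100 -> +20.0."""
    return (current - prior) / prior * 100 if prior else 0.0
```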

We then analyze the backlog and work activity to answer key questions, such as whether the engineering organization’s focus matches demand:

[Screenshot: analysis of engineering focus vs. demand]

For work in progress, we show the current state of everything started, broken down by time spent in each more granular work state, such as In Review or Blocked:

[Screenshot: work-in-progress breakdown by granular work state]
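One way that breakdown might be computed, assuming the tracker can export a log of state transitions (the Transition shape below is a hypothetical stand-in for such an export):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical export format: (issue_key, state, entered_at, exited_at).
Transition = tuple[str, str, datetime, datetime]

def days_in_state(transitions: list[Transition]) -> dict[str, float]:
    """Total days spent in each granular work state (In Review, Blocked, ...)."""
    totals: dict[str, float] = defaultdict(float)
    for _key, state, entered, exited in transitions:
        totals[state] += (exited - entered).total_seconds() / 86400
    return dict(totals)
```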

When analyzing closed work, we derive a full performance evaluation of how, and how well, the work was delivered. There are five primary signals for this, each illustrated in the sketch after this list:

  1. Backlog Change shows how well you’re keeping up with demand;
  2. Cycle Time, which measures the average days from starting a piece of work to completing it, tells you your speed;
  3. Workload Balance evaluates how evenly work is distributed across people, which signals how efficiently the pipeline is working;
  4. Throughput measures how much work you get done per person, per month;
  5. Defect Ratio tracks closed defects against created ones—that is, are you squashing more bugs than you’re introducing?

[Screenshot: the five closed-work signals]
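As a rough illustration of the arithmetic behind each signal, here is one possible formulation. It reuses the hypothetical Issue record from the first sketch, and the formulas are simplified stand-ins rather than Pinpoint’s actual models:

```python
from statistics import mean, pstdev

# Reuses the hypothetical Issue record defined in the first sketch above.

def backlog_change(open_at_start: int, open_at_end: int) -> float:
    """Relative backlog growth (+) or shrinkage (-) over the period."""
    return (open_at_end - open_at_start) / open_at_start if open_at_start else 0.0

def cycle_time_days(issues: list[Issue]) -> float:
    """Average days from starting a piece of work to completing it."""
    done = [i for i in issues if i.started and i.closed]
    if not done:
        return 0.0
    return sum((i.closed - i.started).days for i in done) / len(done)

def workload_balance(issues: list[Issue]) -> float:
    """1.0 = closed work spread perfectly evenly across assignees; lower = more lopsided."""
    counts: dict[str, int] = {}
    for i in issues:
        if i.closed and i.assignee:
            counts[i.assignee] = counts.get(i.assignee, 0) + 1
    values = list(counts.values())
    if len(values) < 2:
        return 1.0
    return max(0.0, 1.0 - pstdev(values) / mean(values))

def throughput(issues: list[Issue], people: int, months: int) -> float:
    """Closed issues per person, per month, over the period."""
    return sum(1 for i in issues if i.closed) / (people * months)

def defect_ratio(closed_in_period: list[Issue], created_in_period: list[Issue]) -> float:
    """Defects closed during the period over defects created during it;
    above 1.0 means you are squashing more bugs than you introduce."""
    closed = sum(1 for i in closed_in_period if i.type == "bug")
    created = sum(1 for i in created_in_period if i.type == "bug")
    return closed / created if created else 0.0
```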

True, most issues include code as an output, and code’s progress is also something we should be able to see, which we address here. But for a topline engineering performance metrics dashboard—one used in executive and business conversations—we’re primarily concerned with issues. When it comes to explaining the state and caliber of work performed, issues are the common currency among engineers, leaders, and stakeholders. Taken together, these signals demonstrate the end-to-end performance of your software pipeline, in language non-engineers can understand.

This last part is crucial. When it comes to understanding engineering performance, there’s an almost total lack of common language. Engineering has plenty of lower-level measures—story points, burndowns, lines of code, etc. But we have little to nothing that actually speaks to our contribution, in ways the business can follow.

Until now.