When people ask what we do, my answer—the so-called elevator pitch—goes something like this: We synthesize the work activity in software delivery tools, then apply proprietary AI to measure impact, anticipate risks, and unlock potential for engineering teams.

If pressed for a shorter version, I say: Pinpoint is an Engineering Performance Management platform.

The problem with the short version is the problem any company in a new category faces: nobody knows what it means. Engineering Performance Management? What kind of made-up buzzword is that? When Salesforce was starting out, they faced the same problem. They didn’t run around pitching a “Customer Relationship Management” platform—CRM didn’t mean anything.

But you have to start somewhere. You decide on a name for this new thing you’re doing, and then you start putting it out there. And gradually, if you’ve done the job right, the name starts to stick...

What it isn’t

With any new tech category, it's sometimes easier to start by defining it by what it isn't. You point to related or adjacent categories and say, "No, not that, and here's why." This is a shortlist of what Engineering Performance Management (EPM) is not:

  • A new flavor of DevOps
  • A new flavor of CI/CD
  • Value Stream Management
  • Product Portfolio Management
  • Code analytics (alone)

Each of these aims to reduce friction in the software delivery chain. Better collaboration, better automation, better clarity around how application and product bets align to business objectives. This is all good, essential work. Collectively, these things have accelerated the ways we move software from idea to production, and map the value created.

But these things don’t tell us much about how the software engineering organization is performing. They automate the engineering machine, allowing it to run faster with fewer handoffs and bottlenecks, but they don’t tell us much about whether the machine is at peak performance—and if not, where and how it might get better.

Put another way, these innovations have helped to organize and automate the factory. We’ve graduated from siloed teams, siloed tools, and manual handoffs to a more connected, fluid, continual state of software release. But if the factory is faster and better, it’s hard to say where and by how much. In a lot of ways, the factory operates in the dark: ideas come in, software goes out, but what happens in the middle is a mystery. Process and tooling investments haven’t done much to help leaders answer the broader, business-centric questions that are second nature to most other departments:

  • Is our work good?
  • Where can we improve?
  • What’s our case for more investment?

To understand this, to make our factory “smart,” we need sensors and machine learning. This is where Engineering Performance Management comes in.

Engineering performance in 3D

Investments in automation tooling (ticketing systems, code management solutions, quality tools, CI/CD, etc.) yield another benefit: they hold a wealth of data about the efficiency and performance of the software organization. By harnessing the raw activity data of the systems where engineering work happens, then applying machine learning, we can develop a real understanding of engineering performance—one that lets teams work without having to stop and report on what they’ve done, and which doesn’t require a separate army of data crunchers to assemble, manage, and interpret results.

We think of Engineering Performance Management as operating along three axes:

[Figure: overview of the three axes of Engineering Performance Management]

The X axis represents the end-to-end process, and the end-to-end systems, involved in building software. Coding is an essential component of course, but software engineering involves more than just code. There’s the work (and systems) to capture requests and ideas, to prioritize the backlog, and to manage its delivery; there’s the work of verifying that what we’ve built is good; there’s the effort of releasing software and optimizing it in production—the full DevOps cycle. For Engineering Performance Management to be worth the name, we need to harness the raw activity data from across all these efforts and systems.

The Y axis is the depth of performance intelligence we can glean, using machine learning, from the activity data harnessed. This starts with being able to compare, say, our performance across key signals from this quarter to last, or to compare one team’s or location’s performance to its labor cost. But comparisons like this don’t require ML. What ML gives is the ability to do considerably more advanced performance analysis. For example, to predict the percentage of engineers likely to leave within the next six months—and then to recommend actions to reduce that turnover.
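To make the attrition example concrete, here is a minimal sketch of what such a prediction might look like. Everything in it is hypothetical: the signals (tenure, activity trend, review-latency trend) and the weights are illustrative stand-ins for features and coefficients that a real system would learn from historical activity data, not anything Pinpoint's product actually computes.

```python
import math

def attrition_risk(tenure_months, activity_trend, review_latency_trend):
    """Toy logistic score: a probability-like risk that an engineer
    leaves within six months. Weights are hand-picked for illustration,
    not trained on real data."""
    # Hypothetical signals: a negative activity_trend means commits and
    # reviews are declining; a rising review latency suggests disengagement.
    z = (-1.5
         - 0.02 * tenure_months
         - 2.0 * activity_trend
         + 1.5 * review_latency_trend)
    return 1.0 / (1.0 + math.exp(-z))

def team_attrition_forecast(engineers):
    """Expected share of the team at risk: the mean of individual scores."""
    risks = [attrition_risk(**e) for e in engineers]
    return sum(risks) / len(risks)

team = [
    {"tenure_months": 6,  "activity_trend": -0.4, "review_latency_trend": 0.6},
    {"tenure_months": 30, "activity_trend":  0.1, "review_latency_trend": 0.0},
    {"tenure_months": 14, "activity_trend": -0.1, "review_latency_trend": 0.2},
]
print(f"Forecast share at risk: {team_attrition_forecast(team):.0%}")
```

The point of the sketch is the shape of the problem, not the weights: per-engineer signals roll up into a team-level forecast, which is what lets the system move from reporting to recommending actions.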

The Z axis shows the people Engineering Performance Management should help. It should help technology executives better understand things like performance relative to cost; it should help managers run teams more effectively; it should help individual engineers better see and measure their contribution to team and business goals. In the same way that advanced analytics like sabermetrics came to be embraced by GMs, coaches, and players alike, so Engineering Performance Management needs to help every member of the organization find and unlock better performance.

These three dimensions are also useful for thinking about the vendor landscape. Over the past couple of years, there’s been a proliferation of companies focused on code analytics. This makes sense: there’s good performance data to be had by instrumenting git activity. The catch is that this information is useful mainly to engineers and engineering managers.

Engineering Performance Management needs to illuminate the bigger picture. Its intelligence needs to be broad enough and deep enough that leaders can see the performance of the engineering pipeline from end to end:

[Figure: the end-to-end scope of Engineering Performance Management]

No EPM without ML

For Engineering Performance Management, machine learning is more than just buzzword compliance. It’s essential for answering questions like:

  • How well do we deliver work?
  • Who are our best people?
  • What work is at risk, and why?
  • Which locations should we invest in?
  • Are we getting better over time?

Machine intelligence is how we derive the signals we need to get usable, actionable insights into performance.

Take risk analysis as an example. By scanning the entire work history of the organization, and examining the metadata associated with all the work items (including parent items, children, grandchildren, etc.), we develop an automatic prediction of the time remaining to complete the assigned work, based on an array of variables including work type and priority, as well as team-specific variables such as historic on-time delivery rate, Throughput, and Cycle Time.
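A stripped-down sketch of that prediction, under loose assumptions: here the "model" is just a median of historical cycle times keyed by work type and priority, padded by the team's on-time delivery rate. The function names, the fallback prior, and the buffer rule are all invented for illustration; a production system would learn these relationships from the full work history rather than use a lookup table.

```python
from statistics import median

def predict_time_remaining(open_items, historical_cycle_times, on_time_rate):
    """Estimate days to complete the remaining work items.

    historical_cycle_times maps (work_type, priority) -> observed cycle
    times in days. A trained model would replace this simple median
    lookup; on_time_rate pads the estimate for teams that historically
    slip their commitments."""
    total = 0.0
    for item in open_items:
        key = (item["type"], item["priority"])
        samples = historical_cycle_times.get(key) or [3.0]  # fallback prior
        total += median(samples)
    # A team delivering on time 80% of the time gets a 20% buffer.
    buffer = 1.0 + (1.0 - on_time_rate)
    return total * buffer

history = {
    ("bug", "high"): [1, 2, 2, 4],
    ("feature", "medium"): [5, 8, 13],
}
backlog = [
    {"type": "bug", "priority": "high"},
    {"type": "feature", "medium" if False else "priority": "medium"},
]
backlog = [
    {"type": "bug", "priority": "high"},
    {"type": "feature", "priority": "medium"},
]
print(predict_time_remaining(backlog, history, on_time_rate=0.8))  # → 12.0
```

Even this crude version shows why the variables the paragraph lists matter: work type and priority select the historical distribution, while team-level signals like on-time rate adjust the raw estimate.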

By incorporating machine learning, EPM takes us a long way past the legacy realms of analytics and BI to understand and unlock performance. A much deeper intelligence becomes available. This includes automated diagnostics, the ability to see where technical debt is accumulating (and what the corresponding costs are), even to understand how effectively we onboard new engineers. Machine learning helps us turn raw data into action by helping us understand trends, predict outcomes and find the patterns that will help people to build software better.

Performance in language the business understands

Above all, EPM provides a way to describe our performance in language any business person can follow. The CEO isn’t likely to care about commit counts, or an explosion of inscrutable charts. When asked how we’re doing, we need to answer with data that shows how much we’ve gotten done, how fast we got it done, how good the work was, and what it all cost.

It’s past time for engineering to earn an equal seat at e-staff and board meetings, which means matching other departments in being able to explain what we’ve done, how we’re trending, and where we’ve contributed to the bottom line. This is what Engineering Performance Management exists to do.

Get the data science behind high-performance teams.