After a year of working with a select group of enterprise companies to refine Pinpoint, now seems as good a time as any to talk about the ideas behind the product. To keep it conversational, we did this as an interview, with our head of marketing turned chief questioner. What follows has been edited for length and clarity.

What was the genesis of Pinpoint?

Nolan Wright (Chief Product Officer): It came from pain we experienced ourselves in building software companies. As an engineering organization gets bigger, you feel more and more blind. You feel unable to answer basic questions. There are lots of charts being produced, but they don't really tell you a story. In some ways, this is triggered by all the source systems out there. All these systems — Jira, GitHub, New Relic — came about to solve a particular problem, which is great. The downside is that you have all these various systems, and each one has just a piece of the puzzle. You have a piece here, a piece there, and there's no easy way to put those pieces together to form a coherent picture of organizational performance. How do you unearth that picture and synthesize it in a way that makes business sense?

Jeff Haynie (Chief Executive Officer): At the executive level, you're always trying to align the strategic objectives of the company — and the budget around those strategic objectives — with the day-to-day activities of the development teams. And when you find they aren't aligned, it's often not clear why. Somehow, the planning inputs and outputs get skewed.

Why has measuring the performance of software engineering been so difficult?

Jeff: I don't know! Being able to show how well the organization operates, and what it contributes to the business… engineering has just lagged here, at least compared to other parts of the company (sales, marketing, finance, etc.). And I say this as a long-time engineer. I think it's a combination of things. Software actually did start out in a more measured way. If you think about the kind of industrial-scale, enterprise software we used to build — Microsoft Project, function point counting, and so on — we started out with a much more formal way of thinking about measuring the software pipeline and delivery.

And I think it failed. We didn't get to the outcomes we wanted because we spent all of our energy in planning meetings talking about what we were going to do instead of actually doing it. It failed, and that created a sort of cultural backlash. People said, "That doesn't work, it's a big waste of time." So Agile comes along and, necessarily, puts a flamethrower to all that bureaucratic process-and-artifacts work. But in the process, we threw the baby out with the bathwater.

Parallel to that, software eating the world meant that developers became more important than ever, with all of the attendant perks and benefits needed to recruit and retain great people. And that's created a bit of mythology around development — you know, it's fundamentally a creative endeavor, which it is, but then by extension: "Oh, it can't really be measured." The other parts of the business didn't get that kind of special treatment. Imagine a head of sales telling the CEO that performance measurements might scare people off!

Nolan: Another complicating factor has been the lack of consensus around what should be measured. Just about everyone has scars from some past attempt to build comprehensive KPIs or metrics. With software, the desired outcomes aren't always as clear-cut as they are with something like sales, where revenue is the measure, or marketing, where it's all about leads. Having consensus and clarity on the outcomes is obviously everything — that's what drives what you decide to measure.

What was the "eureka" moment behind Pinpoint?

Nolan: Well, it started from the simple recognition that we have all these source systems that carry the actual record — the tickets, the code, the deployment data — of what's been done and what's being done. We can derive a much truer picture of organizational performance from that, and we can do it without the usual human intervention to query, connect, and massage the data into a more readable form, without asking teams to work differently, and without making them swivel-chair over to some other system to input what they've done.

Jeff: There's no behavioral change required by the teams doing the work. We all know how well that would go over…

Nolan: The next part was constructing the signals we felt were most indicative of people and team performance. With those in place, you can use data science — and ultimately, machine learning — to do correlations and comparisons, which leads to better data interpretation, which then graduates to prediction: being able to suggest corrective action before performance starts falling off in a given area.
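To make the idea of a derived signal concrete, here is a minimal sketch in Python. The ticket fields, the teams, and the choice of cycle time as the signal are illustrative assumptions for the sake of example, not Pinpoint's actual data model.

```python
# Purely illustrative: derive a simple "cycle time" signal from raw ticket
# records and compare two teams. Field names and the signal itself are
# assumptions made for this example.
from datetime import datetime
from statistics import median

# Imagine tickets already pulled from a source system (e.g., an issue tracker).
tickets = [
    {"team": "payments", "started": "2019-03-01", "finished": "2019-03-05"},
    {"team": "payments", "started": "2019-03-04", "finished": "2019-03-12"},
    {"team": "search",   "started": "2019-03-02", "finished": "2019-03-03"},
    {"team": "search",   "started": "2019-03-06", "finished": "2019-03-10"},
]

def cycle_time_days(ticket):
    """Elapsed days between start and finish of one ticket."""
    started = datetime.fromisoformat(ticket["started"])
    finished = datetime.fromisoformat(ticket["finished"])
    return (finished - started).days

# Group cycle times by team and summarize with the median, which is less
# sensitive to the occasional runaway ticket than the mean.
by_team = {}
for ticket in tickets:
    by_team.setdefault(ticket["team"], []).append(cycle_time_days(ticket))

for team, cycle_times in sorted(by_team.items()):
    print(f"{team}: median cycle time {median(cycle_times)} days")
```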

Not to go down a rabbit hole on machine learning, but I think machines are actually well suited to solve some of these traditional problems of software delivery. Take project estimation. Humans are actually terrible at estimation. There's too much emotion in it. We've been burned by something, so we overestimate, or we crushed the last project, so we underestimate the next one… But machines are really good at looking at patterns of historical data — how long a given team has taken on a given kind of issue, of a given size and priority — and using that to forecast the most probable delivery date. It doesn't replace human intelligence, of course. But it's a really important supplement.
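As a concrete illustration of forecasting from historical patterns, here is a minimal Monte Carlo sketch in Python. The issue types, sizes, durations, and the percentile approach are assumptions made for the example; they are not a description of Pinpoint's models.

```python
# Purely illustrative: forecast a probable finish date by resampling how long
# similar issues have actually taken in the past, rather than asking people
# for fresh estimates.
import random
from datetime import date, timedelta

# Historical durations (in days) for issues a team has completed,
# keyed by (issue_type, size). In practice this would come from the tracker.
history = {
    ("bug", "small"): [1, 2, 2, 3, 5],
    ("feature", "medium"): [4, 6, 7, 9, 14],
    ("feature", "large"): [10, 12, 15, 21, 30],
}

def forecast_finish(backlog, start, trials=10_000, confidence=0.85):
    """Monte Carlo forecast: sample a historical duration for each backlog
    item, sum them per trial (a simplification that assumes serial work),
    and return the date that the given share of simulated futures hits."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.choice(history[item]) for item in backlog))
    totals.sort()
    days_needed = totals[int(confidence * (trials - 1))]
    return start + timedelta(days=days_needed)

backlog = [("feature", "large"), ("feature", "medium"), ("bug", "small")]
print(forecast_finish(backlog, start=date(2019, 4, 1)))
```

Sampling from what the team actually took in the past is the point of the exercise: the forecast inherits the team's real variability instead of anyone's optimism or scar tissue.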

Jeff: More broadly, think about the way something like CRM changed the way businesses thought about a customer and the selling process. It essentially gave the business a unified data model for all things customer. I see the same opportunity for software: arriving at a common way of talking about what the engineering organization does for the business, and then shaping or reshaping how the work best gets done.
