Signals Glossary

Insights that drive action.

Understand every Pinpoint performance signal, why it matters, and how it's derived, so you have a clear view of the actions that boost software engineering performance.

Average Days Late

Insights

Average days late for completed issues with due dates.
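
A minimal sketch of the arithmetic, assuming days late is simply the completion date minus the due date (negative when an issue lands early); the issue records below are made up:

```python
from datetime import date

# Hypothetical completed issues with due dates: (due date, completed date).
completed_issues = [
    (date(2024, 3, 1), date(2024, 3, 4)),    # 3 days late
    (date(2024, 3, 10), date(2024, 3, 9)),   # 1 day early
    (date(2024, 3, 15), date(2024, 3, 20)),  # 5 days late
]

# Days late per issue; negative values mean the issue finished early.
days_late = [(done - due).days for due, done in completed_issues]
average_days_late = sum(days_late) / len(days_late)
print(round(average_days_late, 2))  # (3 - 1 + 5) / 3 = 2.33
```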

Why this matters

Average Days Late reflects how well a team is meeting its targeted deadlines. It is a measure of both planning and delivery effectiveness.

How to use

If Average Days Late is higher than target, it could be a sign that there are bottlenecks or inefficiencies that warrant investigation. Examine Throughput to determine if this is the case.

High Average Days Late could also signal that the team is overly optimistic in the planning process or is not building slack into the schedule for inevitable complications and unplanned work. Consider using historical Throughput as a yardstick to sanity-check the level of work teams sign up for as they plan. Scrutinizing Sprint Health will reveal whether scope control and/or excessive unplanned work is the culprit.

Backlog Change

Target Range
-100% to 0%
Insights

Backlog Change shows the percent change in the number of open issues for the period specified. A negative figure means more issues were closed than opened; a positive figure means more issues opened than closed.
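
In sketch form, with made-up counts of open issues at the period boundaries:

```python
# Hypothetical open-issue counts at the start and end of the selected period.
open_at_start = 120
open_at_end = 96

# Percent change in open issues; a negative figure means more issues
# were closed than opened during the period.
backlog_change = (open_at_end - open_at_start) / open_at_start * 100
print(f"{backlog_change:.0f}%")  # -20%
```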

Why this matters

An increase in backlog is often a natural outcome of the planning process, as new work is defined and captured. However, a backlog that only ever increases may suggest too much feature ideation against too little execution.

How to use

Keep an eye on any spikes in Backlog Change that occur independent of planning cycles, which could signal quality issues or a resource imbalance.

Steady change in backlogs over time (whether growing or shrinking) may also signal a need for resource adjustments. Growing backlogs may indicate additional team members are needed. A shrinking backlog may signal that a team has resource capacity that could be allocated elsewhere.

Changes per Commit

Target Range
0 to 100
Insights

To calculate Changes per Commit, we take total code additions plus total code deletions and divide by the number of commits within the selected period.
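
The same arithmetic as a sketch, using hypothetical per-commit diff stats:

```python
# Hypothetical commits for the selected period: (lines added, lines deleted).
commits = [(120, 30), (15, 5), (60, 40)]

# Total additions plus deletions, divided by the number of commits.
total_changes = sum(added + deleted for added, deleted in commits)
changes_per_commit = total_changes / len(commits)
print(changes_per_commit)  # (150 + 20 + 100) / 3 = 90.0
```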

Why this matters

Generally speaking, the fewer the changes per commit, the better. With fine-grained changes, root cause analysis and fixes are easier in the event of a problem. You have a more precise view into what changed, and when. There is also much higher alignment between the description of the commit and everything it actually includes.

How to use

Guidelines of course can help encourage developers to favor smaller, more frequent commits, and to only include related changes in the same commit. But guidelines only go so far. This is where Pinpoint comes in. When people see that a best practice is more than just hot air–that the practice is actually being measured, and used to inform performance evaluation–the adoption rate is... remarkable.

Closed Issues

Insights

Closed Issues accounts for all issues closed, regardless of whether development work was performed. Completed Issues counts only issues closed after development work.

Why this matters

Many issues may not make it to development, or they may not require the involvement of development teams. For this reason, Pinpoint makes a distinction between issues that were closed with development work performed–Completed Issues–and issues that were closed with no development work, what we call Retired Issues. Closed Issues represents the sum of both Completed Issues and Retired Issues.

Code Ownership

Insights

Code Ownership evaluates the total lines of existing code a person has contributed across all repositories.

Why this matters

In addition to showing one important part of developer contribution, Code Ownership also helps to show a developer’s code base familiarity, and domain expertise.

How to use

Note that Code Ownership reflects existing, not total, lines of code contributed by a person. That is, as a person’s code is reduced or rewritten by others, those lines are removed from his/her Code Ownership total.

Also note that the Code Ownership signal ignores 3rd party modules and libraries. These are not counted toward a person’s Code Ownership.

Commits

Insights

To calculate Commits, Pinpoint looks at the master branch of every tracked repository.

Why this matters

Commits, alongside Issues Worked, helps to surface developer activity over time.

Completed Issues

Insights

Completed Issues counts issues that were closed after development work was performed. (This is as opposed to Retired Issues, which counts only those issues that were closed with no development work performed.)

To determine Completed Issues, we look for every issue that had at least one in progress state–meaning, presumably, that development engaged–and that was subsequently closed, with a resolution of done or fixed.

Why this matters

Completed Issues, together with Cycle Time, are used to determine Throughput.

Contributor

Insights

Broadly speaking, a Contributor is any active member of the technology organization whose work is captured at least in part by the source systems (such as Jira, GitHub, etc.) that Pinpoint instruments. Primary examples include software developers and testers.

Cost

Insights

Cost is the annual labor cost of a person, as captured by their salary band. In the case of team cost, Pinpoint sums the annual labor cost for all people assigned to that team.

Why this matters

Cost may be one of the largest blind spots for software organizations. Macro budgets are set annually or biannually with finance teams, but the return on that spend is rarely properly understood. By surfacing the costs of teams, people and regions, and by comparing those costs to performance, Pinpoint allows leaders to run software like a business.

How to use

Pinpoint shows Cost and performance by person, team, and region so leaders can make better decisions about resource allocation. The region view is especially useful in evaluating the cost effectiveness of contract and offshore arrangements.

In the future, we will also surface Cost by issue, so software leaders and their business peers can have informed discussions about the best use of budgets.

(Showing capitalization, long a nightmare of guesswork for finance teams and their software peers, is on our roadmap.)

Cycle Time

Insights

To calculate Cycle Time:

  • For teams, we measure the time in days from an issue first being set to in progress, to the time it is completed. We exclude from this calculation any issues that are never set to in progress (for example, that go straight from open to closed), as these issues presumably didn’t engage development. Cycle Time also makes allowance for issues that are closed and subsequently re-opened. In these instances, the latest closed date is used.
  • For people, we sum the total days in progress for all Completed Issues that person worked in the time period selected. This figure is then divided by the total Completed Issues that person worked for the same period. So, a person who works on five Completed Issues in a given period, and whose total in progress duration during that period is 50 days, would have a Cycle Time of 10 days (50 / 5) for the period.
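
A simplified sketch of both views; the in-progress durations are made up, and the team figure here is just an average over the period's Completed Issues (the real calculation works from full issue state histories):

```python
# Hypothetical Completed Issues for a team: days each spent in progress.
# Issues that were never set to in progress are excluded before this point.
team_in_progress_days = [4, 7, 12, 3, 9]

# Team view (assumption: a simple average across Completed Issues).
team_cycle_time = sum(team_in_progress_days) / len(team_in_progress_days)
print(team_cycle_time)  # 7.0

# Person view: total in-progress days across the issues they worked,
# divided by the number of those issues (the 50 / 5 example above).
person_cycle_time = 50 / 5
print(person_cycle_time)  # 10.0
```
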
Why this matters

Cycle Time is one of the key efficiency signals for software development. It measures the fundamental capability of a software team – namely, the timely completion of issues. Shorter Cycle Time indicates an optimized development process and faster time to market. Longer (or lengthening) Cycle Time may signal an understaffed team, or inefficiencies in the team or process.

Cycle Time also informs Forecast Work Months, which helps to drive more accurate, more predictable issue completion dates.

How to use

A word of caution in using Cycle Time to compare teams. If one team is building with the benefit of a loosely-coupled microservices architecture, while another works on a legacy application with a monolithic code base, there are bound to be differences in their Cycle Times. The objective should be for each team to improve its score over time.

For teams looking to improve Cycle Time, start by examining the workflow. The detailed view for Cycle Time shows the distribution of days across time to start (not included in the Cycle Time calculation), in progress duration, and verification duration. This will help identify bottlenecks like wait time.

It also helps to think smaller. Are teams composing user stories in the smallest possible increments of valuable functionality? This allows for quicker iterations at lower risk, since the amount of change is contained. Are the teams themselves as small as possible? Smaller, self-contained teams usually mean faster cycles, higher quality and less rework. And as always, look to automate as much as possible – regression tests, provisioning, and the like.

Defect Density

Why this matters

Defect Density helps to reveal the quality of code owned by a team. It may also be used as a proxy for technical debt, helping to determine the relative cost of maintaining the code as-is.

How to use

Code bases with higher Defect Density may be in line for refactoring. The trend in Defect Density over time, as shown on the detail view, will help decide whether a code base is reaching a point where the costs (and risks) to maintain are outweighed by the costs to refactor.

Note: Some teams may not have an associated code repository. In these instances, Defect Density is excluded from the performance summary calculation.

Defect Rate

Why this matters

Defect Rate indicates how much of a team’s effort was spent addressing bugs. A rising Defect Rate may be a warning sign that a growing amount of time is being spent on quality issues instead of working on new features, as reflected in Innovation Rate.

How to use

For teams with a higher Defect Rate, look for activities that would naturally drive up bugs: a large new feature, work to re-architect, a quality-focused sprint, etc. Absent these, consider seeding teams that have higher Defect Rates with members from teams that have low Defect Rates.

Delivered vs. Final Plan

Target Range
75% to 100%
Why this matters

Predictability is a hallmark of a high-performing team. Delivered vs. Final Plan speaks directly to a team’s ability to forecast accurately and meet its targets, to do what it says it will do.

Delivered vs. Final Plan is one of the variables in determining Sprint Health.

How to use

To say that teams should strive for better, more accurate planning is obvious. To say they should become infallible is silly. Software development is complex and fluid, and the real goal is quick adaptability in the face of change. Delivered vs. Final Plan is a way for teams to keep themselves honest about their planning maturity.

Forecast Work Months

Insights

Forecast Work Months is an automatic prediction of the time remaining to complete the assigned work, based on an array of variables including issue type and priority, as well as team-specific variables such as historic on-time delivery rate, Throughput, and Cycle Time. Other related factors, such as a team’s backlog, are also included.

Why this matters

Manual estimates are little more than a guessing game. With Forecast Work Months, Pinpoint uses data science to forecast the time remaining.

Our predictive model is applied at the lowest, or most granular issue level. This means that where parent or grouping issues are used (e.g. stories, epics), we traverse the entire issue tree, calculating Forecast Work Months for each child issue, and roll up from there.

How to use

Like every Pinpoint signal, Forecast Work Months is meant to supplement, not replace, human intelligence. Manual estimates by subject matter experts remain a necessary part of any project forecast. What Forecast Work Months provides is a reality check. It’s entirely possible for teams to complete work more quickly than their historical averages, but assuming they will is risky.

Innovation Rate

Why this matters

Ask most software organizations what their ratio is between innovation work and keeping the lights on (KTLO), and you’re likely to get a blank stare. And yet knowing the answer goes to the heart of understanding the technology organization’s value to the business. By illuminating the percent of work spent on new features, Pinpoint aims to help tech organizations better understand their value, or better publicize it, or both.

How to use

A low percentage isn’t automatically cause for concern. Companies may assign certain teams to maintenance-style projects, for which a low (or zero) Innovation Rate is expected. For teams whose Innovation Rate is expected to be higher, a natural diagnostic step is to investigate their Defect Rate. A higher Defect Rate is bound to impact time spent on new features.

Issue Days

Insights

To calculate Issue Days, we first look at the total number of items a person has worked. We then take that total work and multiply it by that person’s average Cycle Time for those issues. That is, total issues worked, multiplied by the average time it took to complete each, as measured in days from start to finish.

So, if for a given period I worked 16 issues, and it took me an average of 11 days to complete each issue, my Issue Days would be 176.
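
That example, as a sketch:

```python
# From the example above: 16 issues worked in the period,
# at an average Cycle Time of 11 days per issue.
issues_worked = 16
average_cycle_time_days = 11

issue_days = issues_worked * average_cycle_time_days
print(issue_days)  # 176
```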

Why this matters

Issue Days is essentially a proxy for workload—importantly, one that’s based on actuals, not estimates. With Issue Days for each person, we can then begin to see how workload is distributed across a team, or a department, or even the engineering organization as a whole.

How to use

Issue Days is a key variable in our models for other performance signals, including Workload Balance and Throughput.

Issues Worked

Insights

Issues Worked counts all the issues a person worked in the selected time period that were completed in that same period. If multiple people work the same issue, it’s counted in each of their Issues Worked totals.

Why this matters

Issues Worked is a primary signal for understanding activity and work volume.

Performance Score

Insights

Performance Score is a composite of several signals used to evaluate overall performance.

The subject of a Performance Score may be a person, team, location, repository or even things like programming languages. Performance Scores are created by calculating the percentile rank of each relevant signal, summing them and then calculating an overall percentile rank from the summed total.
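
A minimal sketch of that roll-up, assuming a simple "percent of peers at or below this value" percentile rank and signals where higher is better (how each real signal is oriented before ranking isn't covered here); the values are made up:

```python
def percentile_rank(value, values):
    # Percent of observations at or below the given value (simplified).
    return 100.0 * sum(v <= value for v in values) / len(values)

# Hypothetical signal values for four teams; higher is assumed to be better.
signals = {
    "throughput":    [50, 42, 61, 38],
    "sprint_health": [80, 72, 90, 65],
}

# Percentile-rank each signal, sum the ranks per team, then take the
# percentile rank of that summed total as the overall Performance Score.
summed = [
    sum(percentile_rank(values[team], values) for values in signals.values())
    for team in range(4)
]
scores = [percentile_rank(total, summed) for total in summed]
print(scores)  # [75.0, 50.0, 100.0, 25.0]
```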

Why this matters

Performance Scores provide a single value that represents how well a subject is performing relative to others.

How to use

Performance Scores can be used to quickly identify high and low performers. You can also examine each individual component of a score to see if there are particular areas that may be improved.

Planned Issues

Target Range
75% to 100%
Insights

Planned Issues shows the percent of Completed Issues that were assigned to a sprint. Completed Issues that were not assigned to a sprint are considered unplanned.

If an issue takes multiple sprints to close, it is still counted as planned. Issues that slip across sprints do impact Delivered vs. Planned, however.

Why this matters

For sprint teams, being able to see exactly how much work is planned vs. unplanned is important in understanding overall team performance. Few teams have the luxury of planning 100 percent of their work; no one is immune to emergency requests. But teams with a lower percentage of Planned Issues may be facing emergency asks or other disruptions on a regular basis.

Review Rework Rate

Insights

Where Rework Rate measures reversion in issue state, Review Rework Rate looks at how often code requires further change following code review.

Why this matters

Review Rework Rate helps you to see how efficiently developers are working code to a finished state, according to the standards of the team or organization.

How to use

A high Review Rework Rate may signal that a developer is contending with code that’s higher in complexity, or whose function is poorly defined, or both. It may also be a symptom of speed over thoroughness. In either case, a high Review Rework Rate suggests a person in need of help, maybe in the form of further training or coaching.

Rework Rate

Target Range
0% to 10%
Insights

Let’s say a standard issue flow is open -> in development -> in verification -> closed. If an issue proceeds to in verification but subsequently is returned to in development—that is, it reverts to a prior issue state—we flag this as rework. We calculate Rework Rate by tallying all the Completed Issues that had at least one reversion. We then divide this total by Completed Issues for the selected time period.

At the individual level, this means that whenever an issue reverts, the Rework Rate for the person who worked the issue is updated. If multiple people have worked the issue, each of their Rework Rates is updated.

The number of times an issue may revert isn’t relevant to the signal calculation. That is, even if an issue were to revert multiple times to a prior state, for purposes of Rework Rate it is counted only once.
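
A sketch of the tally, using made-up state histories for three Completed Issues:

```python
# Standard issue flow; a reversion is any move back to an earlier state.
flow = ["open", "in development", "in verification", "closed"]

# Hypothetical state histories for Completed Issues in the period.
histories = [
    ["open", "in development", "in verification", "closed"],
    ["open", "in development", "in verification", "in development", "closed"],
    ["open", "in development", "in verification", "in development",
     "in verification", "in development", "in verification", "closed"],
]

def has_reversion(history):
    steps = [flow.index(state) for state in history]
    # True if any transition moves backward; multiple reversions still count once.
    return any(later < earlier for earlier, later in zip(steps, steps[1:]))

rework_rate = sum(has_reversion(h) for h in histories) / len(histories)
print(f"{rework_rate:.0%}")  # 67%
```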

Why this matters

Rework Rate helps you to see how efficiently issues are put to bed—that is, to what degree “done” really means done.

How to use

Teams or people showing higher Rework Rates may be contending with issues that are higher in complexity, or poorly defined, or both. A higher Rework Rate may also be a symptom of haste making waste. Check the team’s or person’s Cycle Time. Poor test coverage and/or low test automation also affect Rework Rate.

Sprint Health

Target Range
75% to 100%
Insights

Sprint Health is a composite signal for teams to track and improve their sprint planning and efficiency.

To derive Sprint Health, we examine a sprint’s work at three points: the initial plan, the final plan and what is actually delivered. Looking at these, we calculate:

  • Plan Survival Rate: the percent of issues from the initial plan that remain in the final plan
  • Plan Growth Rate: the percent change in the total number of issues from the initial plan to the final plan
  • Delivered vs Initial Plan: the percent of issues in the initial plan that are completed in the sprint
  • Delivered vs Final Plan: the percent of issues in the final plan that are completed in the sprint
  • Unplanned Delivery Rate: the percent of issues completed in the sprint that were not in the initial plan

These five measures are averaged to produce the Sprint Health value for all sprints completed during the selected time period.
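
A sketch of those five measures for a single sprint, using made-up issue IDs for the initial plan, final plan, and delivered work (the sketch glosses over how each measure is oriented before averaging):

```python
# Hypothetical issue IDs at the three points in one sprint.
initial_plan = {1, 2, 3, 4, 5}
final_plan = {1, 2, 3, 6, 7, 8}
delivered = {1, 2, 6, 7, 9}

plan_survival_rate = len(initial_plan & final_plan) / len(initial_plan)        # 60%
plan_growth_rate = (len(final_plan) - len(initial_plan)) / len(initial_plan)   # 20%
delivered_vs_initial_plan = len(initial_plan & delivered) / len(initial_plan)  # 40%
delivered_vs_final_plan = len(final_plan & delivered) / len(final_plan)        # ~67%
unplanned_delivery_rate = len(delivered - initial_plan) / len(delivered)       # 60%

sprint_health = (plan_survival_rate + plan_growth_rate + delivered_vs_initial_plan
                 + delivered_vs_final_plan + unplanned_delivery_rate) / 5
print(f"{sprint_health:.0%}")  # 49%
```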

Why this matters

A team’s ability to accurately plan and execute sprints is the bedrock of effective, predictable delivery. Sprint Health is a way for teams to keep themselves honest about their planning maturity and the ability to hit their targets.

How to use

Sprint Health can be a revealing diagnostic for optimizing a team’s sprint mechanics or for troubleshooting pipeline signals like On-time Delivery. A lower Sprint Health score can indicate a team is too optimistic in its planning, isn't allowing for unplanned work, and/or is suffering from scope creep. Examining the Sprint Health funnel graph (the distribution view on the Sprint Health page) should point to which of these factors are at play.

A low score may also be a sign that the team is not striving to deliver a specific set of scope in its sprints, but is instead using the sprint plan as a rolling backlog, where unfinished work gets pushed from one sprint to the next. Depending on the type of work the team performs, this may be fine–for example, if a team’s primary focus is addressing support tickets. Teams delivering functionality will likely want to get more deliberate in their planning so they can predictably hit their targets and improve their results over time.

Strategic Issues

Why this matters

Fundamental to any organization is understanding whether the actual work effort of teams matches the priorities of the company. Strategic Issues is a signal of exactly this. It shows how much of each team’s work is contributing to the delivery of the company’s strategic initiatives.

How to use

A low percentage isn’t automatically cause for concern. Companies may designate certain teams whose primary responsibility is for maintenance-style projects, for which a low (or zero) Strategic Issues percentage is expected.

For teams whose Strategic Issues work should be higher, it’s logical to investigate whether the problem is one of prioritization or throughput. In the former case, maybe the team has been drawn into larger-than-expected bug fix work. The distribution of their issue work can be found on the Closed Issues view. To evaluate a team’s throughput, start by checking Cycle Time and Rework Rate for signs of inefficiencies that may be holding the team back.

Throughput

Insights

Completed Issues per person per month x Cycle Time = Throughput.

There are two variables to software engineering throughput. To understand the quantity of work getting done, we look at the number of issues a team completes, per person, per month. It’s important to divide the amount of work a team gets done by the number of people on the team. This normalizes the figure, allowing for a team by team comparison, regardless of the size of each team.

The second variable is the complexity of the work completed. Unlike manufacturing, our work units, our “widgets,” are seldom of equal size. The work required to complete a single issue might range from tweaking a line of code to rethinking a layer in the architecture.

The usual way to try to account for work size or complexity is through something like story points. But there’s a problem: story points make team by team comparison impossible. Because they work to obscure the time aspect, story points are—by design—team-specific.

Instead of story points or any other kind of estimation magic, we use Cycle Time, that is, the actual average number of days it took to finish the given issues. Cycle Time here serves as a proxy for size or complexity.
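
A sketch of the formula with hypothetical figures (the same numbers as the first team in the example under "How to use"):

```python
# Hypothetical team figures for a one-month period.
completed_issues = 30
team_size = 3
months = 1
average_cycle_time_days = 5

issues_per_person_per_month = completed_issues / team_size / months  # 10.0
throughput = issues_per_person_per_month * average_cycle_time_days
print(throughput)  # 50.0
```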

Why this matters

Since the goal of any software team is to complete issues, Throughput is an important signal of team effectiveness and process maturity.

How to use

Imagine a team that completes an average of 10 issues per person per month, with an average cycle time of five days. Another team completes five issues per person per month, with an average cycle time of 10 days. Both teams have a throughput of 50. The team with the longer cycle times is either working on more complex projects, or is possibly less efficient. Teams looking to improve their Throughput should evaluate many of the same things that drive Cycle Time improvement: user stories composed in the smallest possible increments of useful functionality; an optimized team size; strong use of automation, etc.

Traceability

Target Range
85% to 100%
Why this matters

There are several ways Traceability is important:

  • Understand the real distribution of work: When it comes to understanding the kinds of work that are consuming people, it’s not what you think is happening or what people say is happening—it’s what’s actually being written. Truth in code. With Traceability, Pinpoint can show you how much of the coding effort is being spent on new features, how much is being spent fixing bugs, and how much is spent doing maintenance—and you can see these splits by person, team, or for the company as a whole.
  • Quantify technical debt: By analyzing the connections between code and issues, we can also show you the phase of your different repositories—which are used mostly for maintenance work, versus those used for new features. It also uncovers potential hotspots for technical debt, by showing which files within a repo have a high percentage of bugs, as well as the people and teams whose code tends to produce the most bugs. These data are a starting point to putting real numbers to your technical debt.
  • Find the experts: For larger teams, especially those taking on new initiatives, it’s useful to know at a glance which people in the organization have skills most suited to the work at hand. Refactoring an app? Maybe you want to see who’s squashed the most bugs in JavaScript. Pinpoint uses traceability to show who’s done what kind of work, and in what language(s), making it easy to identify the right pros for the job.
  • Measure your own contributions: Pinpoint’s analysis of the code-issue connection means a developer can see, for example, how her code contributed to an improved user experience or better conversion rate.
Workload Balance

Insights

We use the 80/20 rule as a means for understanding Workload Balance—that is, how evenly work is spread across a team. Specifically, Workload Balance evaluates what percent of the available team capacity is involved in delivering 80 percent of the work.

To calculate a person’s work capacity, we use Issue Days. Issue Days is essentially a proxy for workload—importantly, one that’s based on actuals, not estimates. With Issue Days for each person, we can see how workload is distributed across a team, or a department, or even the engineering organization as a whole.

This is where the 80/20 rule comes in. Let’s assume a team with ten people. We might see that for a given period, six of the ten team members, 60 percent, accounted for 80 percent of the Issue Days in the period.

To convert this into Workload Balance, we divide that 60 percent by 0.8. Why? Because we’re using the 80/20 rule, and because we want an even distribution of work to score as 100 percent. If eight of ten people on a team are responsible for 80 percent of its work—i.e., a perfect distribution according to the 80/20 rule—then we want to reflect their Workload Balance as 100 percent ((8÷10)÷0.8), not 80 percent.

In the example above, this team has a Workload Balance of 75 percent: (6÷10)÷0.8. Not bad, but still with room for a better balance of work across the team.
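
The same worked example in sketch form; the Issue Days figures are made up to produce the six-of-ten split described above:

```python
# Hypothetical Issue Days per person for a ten-person team.
issue_days = [30, 28, 26, 24, 22, 20, 10, 8, 7, 5]

total = sum(issue_days)
covered, people_needed = 0, 0

# Count how many people, largest contributors first, it takes to
# cover 80 percent of the team's total Issue Days.
for days in sorted(issue_days, reverse=True):
    covered += days
    people_needed += 1
    if covered >= 0.8 * total:
        break

# Divide by 0.8 so a perfectly even distribution scores 100 percent.
workload_balance = (people_needed / len(issue_days)) / 0.8
print(f"{workload_balance:.0%}")  # 75%
```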

Why this matters

Knowing and being able to show a team’s Workload Balance makes it possible to:

  • Find underutilized capacity
  • Anticipate overwork / burnout risk
  • Quantify the case for adding more people