Design, Code, and Technical Debt Reduction.
For software engineering teams, technical debt is like climate change. It's a problem of slow accumulation: there's rarely enough daily pain to spur action, even if we know the long-term picture is grim. And it gets worse every day it goes unaddressed.
Tackling technical debt means being good at two things:
- Quantifying the cost. The broadness of “technical debt” can trigger analysis paralysis. The key is not to try to boil the ocean. Instead, we want to identify points of leverage: where, across repositories, is the cost of debt highest—and where is the best ratio between investment and return?
- Preventing more debt. As obvious to say as it is hard to do. Debt accumulation is a daily occurrence—every code commit carries the risk of adding tech debt to the bottom line. But machine learning is changing the game.
At Pinpoint, we use data and machine intelligence to attack technical debt.
How to quantify the cost of technical debt
Broadly speaking, the benefit of legacy code is stability. Legacy code earns its legacy: it works just well enough to keep it around. The cost is maintenance—caring for repositories filled with older, more brittle, less well-understood programming languages. This tradeoff may be generally understood, but by leaving the cost of legacy code unquantified, it’s hard to know what price is actually being paid by the organization.
With Pinpoint, we look at this through the lens of Cycle Time. Cycle Time is a signal we automatically derive from systems like Jira, which measures the average number of days from start of work to completion. Because we also harness the metadata of code commits linked to these work items, we can assign a Cycle Time value by code repo and programming language.
Let's take a specific example: a customer with more than 50 programming languages in use. When we analyzed Cycle Times across the languages that had at least 1,000 linked issues, there was a clear outlier: Java. The average Cycle Time for work done in Java was 12 days longer than the average across all languages.
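The per-language analysis described above can be sketched in a few lines of Python. The issue records, dates, and languages below are invented for illustration; in practice the inputs come from Jira work items joined, via their linked commits, to the repositories and languages the work touched.

```python
from collections import defaultdict
from datetime import date

# Hypothetical issue records: (language, work started, work completed).
issues = [
    ("Java", date(2023, 1, 2),  date(2023, 1, 30)),
    ("Java", date(2023, 2, 1),  date(2023, 2, 26)),
    ("Go",   date(2023, 1, 5),  date(2023, 1, 17)),
    ("Go",   date(2023, 2, 3),  date(2023, 2, 18)),
]

def avg_cycle_time_by_language(records):
    """Average days from start of work to completion, per language."""
    totals = defaultdict(lambda: [0, 0])  # language -> [total_days, issue_count]
    for lang, start, done in records:
        totals[lang][0] += (done - start).days
        totals[lang][1] += 1
    return {lang: days / n for lang, (days, n) in totals.items()}

by_lang = avg_cycle_time_by_language(issues)
overall = sum((done - start).days for _, start, done in issues) / len(issues)

# Languages whose average Cycle Time exceeds the overall average are the
# candidates for a closer look, along with how far above average they sit.
outliers = {lang: avg - overall for lang, avg in by_lang.items() if avg > overall}
```

With the toy data above, Java averages 26.5 days against an overall average of 20, surfacing it as the outlier, which mirrors the shape of the real analysis.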
By quantifying the impact on Cycle Time, we’ve taken the first step to showing the tax incurred by working with older languages. To complete the picture, we need to frame it in terms business leaders will pay attention to. We need to show the estimated dollar figure of this “tax.”
Last year, this customer completed roughly 27,000 issues, with an average Cycle Time of 16 days per issue. This means an annual work capacity of 432,000 days (27,000 tickets × 16 days). To convert this into dollars, we divide the annual labor cost by that capacity: $40 million / 432,000 capacity days ≈ $93 per capacity day.
Approximately ten percent of the 27,000 tickets were for work in Java codebases. Because our Cycle Time data shows that this work takes, on average, 12 days longer to complete, that amounts to an extra 32,400 capacity days (2,700 tickets × 12 days). In dollar terms, that is an extra $3 million per year.
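The arithmetic above, using the figures from the source, fits in a few lines:

```python
issues_per_year = 27_000
avg_cycle_days = 16
annual_labor_cost = 40_000_000  # dollars

# Annual work capacity and the labor cost per capacity day.
capacity_days = issues_per_year * avg_cycle_days          # 432,000 days
cost_per_day = annual_labor_cost / capacity_days          # ≈ $93/day

# The Java "tax": ~10% of tickets each take 12 extra days.
java_tickets = round(issues_per_year * 0.10)              # 2,700 tickets
extra_days = java_tickets * 12                            # 32,400 days
extra_cost = extra_days * cost_per_day                    # ≈ $3 million/year
```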
With this data, we now know the carrying cost of doing nothing. This is essential. It means we can now have a rational, data-driven conversation about cost and benefit, and decide whether action is warranted.
Using machine learning to measure code risk
“Code risk” here means the chance that any given code commit is going to add to the organization’s technical debt, or raise the risk of defects. Pinpoint uses machine learning to make this assessment.
Our first step in evaluating code risk is to determine which code commits are fix-related. To do this, we use Natural Language Processing (NLP), along with topic modeling, to analyze commit comments. With a hardened fix-identification model, we then trace each line of code changed by a fix back to the prior commit(s) that last touched the same line. This information, combined with our fix data, allows the model to spot the commits that introduced a bug. We integrate this subset of commits back into our full dataset, labeling each commit as bug-introducing versus other.
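A stripped-down sketch of this pipeline follows. The keyword heuristic stands in for the NLP/topic model described above (Pinpoint's actual classifier is far more sophisticated), and the line-to-commit map stands in for what `git blame` on the fix commit's parent would provide; all names and commit IDs here are invented.

```python
import re

# Stand-in for the fix-identification model: flag fix-related messages.
FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|patch|hotfix)\b", re.IGNORECASE)

def is_fix_commit(message: str) -> bool:
    """Classify a commit message as fix-related (toy keyword version)."""
    return bool(FIX_PATTERN.search(message))

# Hypothetical blame data: (file, line) -> last commit to touch that line.
last_touched = {
    ("app.java", 10): "abc123",
    ("app.java", 11): "abc123",
    ("util.java", 40): "def456",
}

def bug_introducing_commits(fix_changed_lines, blame):
    """Trace lines changed by a fix back to the commits that last changed them."""
    return {blame[loc] for loc in fix_changed_lines if loc in blame}

# A fix commit touched two lines of app.java; both trace back to abc123,
# so abc123 gets labeled as a bug-introducing commit.
culprits = bug_introducing_commits([("app.java", 10), ("app.java", 11)], last_touched)
```

This back-tracing step is essentially the SZZ approach to identifying bug-introducing changes; the output labels ("bug-introducing" versus "other") become the training signal for the risk model.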
We then layer in other indicators that are useful to predicting code risk—these include demographic signals, like the person or team making the commit, as well as more granular inputs, such as programming language, file age and linked issue data. By analyzing hundreds of thousands of fix commits and comparing them with non-fix commits, the model discovers patterns that correlate positively with code risk. With this, Pinpoint provides a way to see not only which repos have a higher debt load, but also the characteristics of commits that contribute to that debt.
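As a minimal, stdlib-only illustration of how layered indicators surface risk patterns, the sketch below computes per-feature bug rates over labeled commits. The commit records and feature values are invented, and Pinpoint's actual approach is a trained machine-learning model rather than simple rate tables; this only shows the shape of the analysis.

```python
from collections import defaultdict

# Hypothetical labeled commits: features plus the bug/other label
# produced by the fix-tracing step.
commits = [
    {"language": "Java", "file_age_days": 900, "label": "bug"},
    {"language": "Java", "file_age_days": 800, "label": "bug"},
    {"language": "Go",   "file_age_days": 30,  "label": "other"},
    {"language": "Java", "file_age_days": 700, "label": "other"},
    {"language": "Go",   "file_age_days": 45,  "label": "other"},
]

def bug_rate_by(feature, records):
    """Fraction of commits labeled 'bug' for each value of a feature."""
    counts = defaultdict(lambda: [0, 0])  # feature value -> [bugs, total]
    for r in records:
        counts[r[feature]][1] += 1
        if r["label"] == "bug":
            counts[r[feature]][0] += 1
    return {value: bugs / total for value, (bugs, total) in counts.items()}

rates = bug_rate_by("language", commits)
```

In the toy data, Java commits carry a higher bug rate than Go commits, which is the kind of positive correlation with code risk the model learns from the full feature set at scale.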
Technical debt is traditionally hard to quantify, and hard to prevent. But Pinpoint enables a modern way to attack it—a data-driven way to understand not only where debt is highest, but where and how debt may be accumulating as it happens.