The second edition of the TechDebt Conference will be held jointly with ICSE 2019 in Montreal, Canada, May 26–27, 2019. The conference is sponsored by ACM SIGSOFT and IEEE TCSE.
All details are here.
The SQALE method has just passed an important milestone. Since the launch of the method on the sqale.org site in August 2010, over 10,000 people have downloaded the definition document. This is quite impressive given the technical (and tedious) nature of the document: of the 22,000 site visitors, nearly half have downloaded it.
It’s impossible to know the exact number of current users. The method is now supported in the open-source version of SonarQube (the most used static analysis tool according to a recent survey). Today, hundreds of thousands of developers monitor SQALE indicators in their daily quality dashboards. This makes SQALE the number one method for managing technical debt.
The SQALE Method is used worldwide, but it is impossible to know the exact geographical distribution of its users. The distribution of site visitors is likely a good proxy for it. According to the sqale.org web statistics, the majority of visitors are located in the USA. The table below shows the detailed distribution of site visitors over the last month.
I previously explained the use of the SQALE Pyramid here. In this post, I will explain how to use the Debt Map indicator.
We can produce this indicator at two levels:
• The first level is the Project level. In this case each point of the map is a file.
• The second level is the Portfolio level. In this case each point is an application.
We’ll see how to use this indicator in each case.
The Project level Debt Map
In this case, each file of the application is placed on the graph (X and Y axes) according to two measures:
X is the total amount of Technical Debt of the file: the estimated time required to fix all identified non-conformities. The higher this value, the more time will be needed to make the file “right”.
Y is the cumulative “Non-Remediation Cost” of all non-conformities identified in the file. The concept of “Non-Remediation Cost” was presented and explained here. In short, it represents the business impact of the non-conformities. The higher the value, the greater the risk incurred if the file is delivered as it is.
Figure 1: SQALE Debt Map at file level
This graph allows you to quickly analyze all the files of the application, as in the example of Figure 1.
This graph is also useful for making decisions about remediation priorities. For example, consider a project working on an application with a legacy part.
If you have very little time available, you will refactor File 4 because it has little debt but this debt is potentially very damaging. Compared to refactoring File 3, your task will have a much higher Return on Investment.
If you have more time available, you will extend the operation to all files with a Non-Remediation Cost over a given threshold. For example, you may decide to refactor the 5 files whose Non-Remediation Cost is over 500. By doing so, you will significantly decrease the level of exposure of your users.
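The two prioritization strategies can be sketched in a few lines of Python. The file names, debt figures, and Non-Remediation Cost (NRC) values below are hypothetical, chosen only to mirror the Figure 1 discussion:

```python
# Hypothetical files mirroring the Figure 1 discussion: remediation debt
# in hours and Non-Remediation Cost (NRC) in business-impact units.
files = [
    {"name": "File1", "debt_h": 40, "nrc": 120},
    {"name": "File2", "debt_h": 25, "nrc": 300},
    {"name": "File3", "debt_h": 60, "nrc": 80},
    {"name": "File4", "debt_h": 8,  "nrc": 700},
    {"name": "File5", "debt_h": 30, "nrc": 550},
]

# Very little time: pick the file with the best return on investment,
# i.e. the highest Non-Remediation Cost per hour of remediation effort.
best_roi = max(files, key=lambda f: f["nrc"] / f["debt_h"])

# More time: refactor every file whose NRC exceeds a given threshold.
THRESHOLD = 500
to_refactor = [f["name"] for f in files if f["nrc"] > THRESHOLD]

print(best_roi["name"])  # File4: little debt, but potentially very damaging
print(to_refactor)       # ['File4', 'File5']
```

With these sample values, File 4 offers by far the best return on investment, and the threshold strategy selects the two files whose NRC exceeds 500.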
The Portfolio level Debt Map
In this case, the points on the map are applications. Each application is positioned according to its Technical Debt density and its Non-Remediation Cost density.
This allows you to analyze the situation of a complete portfolio and to compare applications whatever their technology, size, and context.
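As a sketch, computing the two densities only requires dividing each application's debt and Non-Remediation Cost by its size; the application names and figures below are hypothetical:

```python
# Hypothetical portfolio: size in KLOC, debt and Non-Remediation Cost in days.
apps = [
    {"name": "Billing",  "kloc": 120, "debt_days": 300, "nrc_days": 900},
    {"name": "Intranet", "kloc": 40,  "debt_days": 180, "nrc_days": 100},
]

for app in apps:
    # Dividing by size makes applications of different sizes and
    # technologies directly comparable on the Debt Map axes.
    app["debt_density"] = app["debt_days"] / app["kloc"]  # X axis
    app["nrc_density"] = app["nrc_days"] / app["kloc"]    # Y axis

for app in apps:
    print(app["name"], app["debt_density"], app["nrc_density"])
# Billing 2.5 7.5
# Intranet 4.5 2.5
```

Note how normalization changes the picture: the larger application carries more absolute debt, but the smaller one has the higher debt density.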
Taking the example of Figure 2, this helps to analyze the situation and to identify which part of your portfolio needs attention.
Figure 2: SQALE Debt Map at application level
If an application provides very little “business value” and its annual maintenance workload is very low, the fact that it is not well positioned in the Debt Map is not worrisome.
On the contrary, if an application is very critical and its code is not of good quality (that is to say, it is positioned at the top right of the map), this represents a risk, and improving the code of this application may be a high priority.
Meaningful insights into your Technical Debt
The SQALE Pyramid is certainly the most useful indicator of the SQALE method. It gives a lot of information about the nature of the technical debt and thus supports decision making. I will try to show how it helps answer questions that often arise once you have quantified the technical debt of your application.
Imagine that you have analyzed the code of your application or your project and the total technical debt estimated with SQALE is 50.7 days.
We will run through some questions that could be asked and see how the SQALE pyramid helps to answer them.
Is it a short-term or long-term debt?
The SQALE pyramid shows the distribution of the technical debt according to the chronology of expectations during the life cycle of a code file. The short-term parts of the technical debt are the lower layers of the pyramid (Testability and Reliability), while the parts that will have an impact in the longer term (Maintainability, Portability, Reusability) are the upper layers.
The following example (as reported by the SonarQube tool) shows the distribution of a debt of 50.7 days: there are 13.8 days (4.0 + 9.8) of a rather short-term nature and 30 days of a long-term nature. This debt is long term because its impact will only be perceived when transferring the maintenance of the code to another team.
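The split above can be sketched by representing the pyramid layers as a simple mapping. The testability and reliability figures come from the example; the other layer values are hypothetical, chosen only so that the totals stay consistent with the 50.7 days reported here:

```python
# SQALE pyramid layers, from the bottom (short term) to the top (long term),
# with the debt of each layer in days. Testability and reliability are the
# quoted figures; the other values are hypothetical, chosen so that the
# total matches the reported 50.7 days.
pyramid = {
    "testability":     4.0,
    "reliability":     9.8,
    "changeability":   5.0,
    "efficiency":      1.3,
    "security":        0.6,
    "maintainability": 30.0,
    "portability":     0.0,
    "reusability":     0.0,
}

# Lower layers are short-term debt; upper layers are long-term debt.
short_term = pyramid["testability"] + pyramid["reliability"]
long_term = (pyramid["maintainability"] + pyramid["portability"]
             + pyramid["reusability"])

print(round(sum(pyramid.values()), 1))  # 50.7 days of debt in total
print(round(short_term, 1))             # 13.8 days of short-term debt
print(round(long_term, 1))              # 30.0 days of long-term debt
```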
How critical is my technical debt?
Not all issues found in the code are identical. Some may have a high negative impact on the business, such as security- or reliability-related issues. Following the Technical Debt metaphor, this is the part of the debt with the highest interest. In this category, you will find issues such as logic errors or mismanagement of exceptions.
Other issues are less critical because their presence won’t directly affect the business.
In the example below, the amount of critical debt (that is, debt related to the “Reliability” and “Security” layers of the pyramid) is 10.4 days, or 20% of the total.
How much effort should I spend to make my code more reliable?
First, if you want your code to be reliable, you should also include in your Quality Model a requirement related to test code coverage. This will ensure the efficiency of your test activities (unit, integration, and/or functional tests). This requirement (e.g. an 80% line coverage rate for all files) should be integrated into your SQALE Quality Model under the Reliability characteristic.
As explained in various articles available on this site, in order to ensure the reliability of the code, you should at least solve all the issues related to testability and reliability. So the effort to spend is the sum of the Testability and Reliability debt, which in the SQALE Method is called the SQALE Consolidated Reliability Index (SCRI). In our example it is 13.8 days.
This effort is necessary to improve the reliability of the application, but of course, it is not sufficient. The reliability of your application also depends on additional activities such as peer reviews, beta testing, etc.
How much effort should I spend to make my code more maintainable? (in other words, to reduce the required annual charge to fix bugs and implement Change Requests)
The same logic applies: you must look at the SQALE Consolidated Maintenance Index (SCMI). To reduce future maintenance costs, you should resolve issues related to testability, reliability, changeability, security, performance, and maintainability. In this example, you will need to spend a workload of 50.7 days.
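A sketch of how the consolidated indices work: each one is a cumulative sum over the pyramid, from the bottom layer up to a given characteristic inclusive. The testability and reliability figures are those quoted above; the other layer values are hypothetical, chosen so the total matches the 50.7 days of the example:

```python
# Consolidated SQALE indices as cumulative sums over the pyramid layers,
# from the bottom (testability) up to a given characteristic, inclusive.
# Testability and reliability are the quoted figures; the other values
# are hypothetical, chosen so the total matches 50.7 days.
layers = [
    ("testability",     4.0),
    ("reliability",     9.8),
    ("changeability",   5.0),
    ("efficiency",      1.3),
    ("security",        0.6),
    ("maintainability", 30.0),
]

def consolidated_index(up_to: str) -> float:
    """Sum the debt of every layer from the bottom up to `up_to` inclusive."""
    total = 0.0
    for name, debt in layers:
        total += debt
        if name == up_to:
            return round(total, 1)
    raise ValueError(f"unknown characteristic: {up_to}")

scri = consolidated_index("reliability")      # SQALE Consolidated Reliability Index
scmi = consolidated_index("maintainability")  # SQALE Consolidated Maintenance Index
print(scri)  # 13.8
print(scmi)  # 50.7
```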
Where do I start to repay the technical debt of my code?
There are multiple strategies for setting refactoring priorities.
The most relevant one depends mainly on your context and especially on the budget you can allocate to this activity.
Let’s illustrate two cases:
1 – You are far from the delivery date, so you can allocate a workload representing a large percentage of your total technical debt (at least 60%).
In this case, you should improve the quality of the code by first making it testable: solve issues such as overly complex methods, duplicated code, etc. Then you pay back the debt associated with the next layer up in the SQALE pyramid, which is reliability, and so on.
2 – You have very little time. You can’t repay the debt related to testability because it’s structural and therefore time-consuming. So you will deliver your application with remaining debt, and it is wise to reduce the criticality of that debt. You will focus your efforts on correcting the critical issues, the ones with the highest potential business impact: the issues related to reliability. In this case you will need 9.8 days.
It should be noted that this last strategy is not optimal, because you may fix potential bugs in pieces of code that should be refactored for testability reasons anyway; that time may then be lost. We can say this is the “quick and dirty” way to manage Technical Debt.
As shown, this pyramid helps to answer many questions related to source code quality. To summarize, the SQALE pyramid helps you to analyze and understand your technical debt on three aspects.
Instead of communicating just the total amount of Technical Debt, it is more useful to report its distribution in the form of a SQALE Pyramid. This should be part of any good Project Management Dashboard.
P.S. The Pyramid helps to answer many other questions. I covered one of them, “How agile is your code?”, in a previous post here.
Among the particularities of the SQALE method, there is one whose importance is not always well understood. I’ll try to explain it in this post.
The SQALE Quality Model identifies quality characteristics and puts them in chronological order. The first one, at the bottom, is Testability.
This means that even before you look at the reliability of your code, its performance, its security, its maintainability by third parties, etc., you must first look at its testability and fulfill the associated requirements.
If your code is not testable (that is, it is too complex, too coupled, …), you will not be able to test it adequately before delivery, and you won’t be able to check and improve its reliability and security. Later, when you make changes and perform corrective maintenance on your application, you won’t be able to properly test and check your work.
This leads to the conclusion that testability is the foundation upon which all the other quality characteristics rely. This does not appear in standards such as ISO 25010, which does not help to raise awareness of the importance of this characteristic.
Because all other abilities depend on testability, if you want to improve the overall quality of an application, you must start by improving its testability. That means refactoring its architecture and its internal structure in order to make it completely testable.
Since its introduction by Ward Cunningham, the concept of technical debt has become quite well recognized and is used more and more by project managers to monitor their projects.
What is quite surprising (and also beneficial) is that this rather technical concept is also used and supported by middle and upper managers. I have already mentioned in a previous post that the CIO of a very large bank (30,000+ developers) monitors the technical debt of his complete portfolio on a quarterly basis, using the SQALE method.
There are probably many reasons for this growing interest, and each manager will have his own. Here are the reasons that, in my opinion, are the most common.
Technical debt is probably the first code-related measure that fulfills the measurement needs of managers; it also fits well into their favorite tool: Excel. Their interest in this measure is not a passing fashion. Technical debt is becoming part of many management dashboards and will support more and more portfolio management decisions.
Your strategic decisions will depend on the precision of your Technical Debt estimations. Make sure that your estimation model is calibrated to your context.
It is sometimes necessary to change the maintenance mode of a legacy application and switch it to an agile mode.
In this case, we must ask ourselves whether the source code of the project in question contains too much technical debt inherited from years of maintenance. If the inherited debt is too high, it is likely that the code does not lend itself to an agile maintenance mode. How do I know which applications are eligible for such a change in maintenance mode, and which are not?
We will see that the SQALE method provides real help for this kind of decision.
What we want to avoid is that the poor (or very poor) quality of the application source code hampers the maintenance activities of the team. In that case, the maintenance team will be far from reaching the productivity achieved in other agile projects.
In a SQALE quality model, the first three expected quality characteristics (those shown at the bottom of the SQALE pyramid) are testability, reliability, and changeability. An agile team performs cycles where testing, debugging, and change activities keep coming at high speed. Its velocity depends mainly on its productivity in these three activities, so the part of the technical debt that corresponds to them is the main concern. Other parts of the debt, such as those related to performance or security, will have a very limited impact on the team’s productivity.
In the SQALE method, the debt specific to these three characteristics is called the SCCI (SQALE Consolidated Changeability Index). This index represents the “agile debt” of your code. When you divide this value by the size of the code, you get the density of this debt. This index, called the SCCID (SQALE Consolidated Changeability Index Density), represents the “agility” of your code.
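A minimal sketch of the SCCI and SCCID computations; the debt figures and the code size below are hypothetical:

```python
# Hypothetical debt figures (in days) for the three bottom layers of the
# SQALE pyramid, and a hypothetical code size in KLOC.
testability, reliability, changeability = 4.0, 9.8, 5.0
kloc = 25.0

# SCCI: the "agile debt" of the code.
scci = testability + reliability + changeability
# SCCID: the same debt normalized by code size, i.e. the "agility" of the code.
sccid = scci / kloc

print(round(scci, 1))   # 18.8 days of agile debt
print(round(sccid, 3))  # 0.752 days per KLOC
```

Because the SCCID is a density, it can be compared across applications of different sizes, which is exactly what the portfolio-level decision requires.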
If you look at the two SQALE pyramids below (which show the distribution of technical debt according to its impact on the life cycle activities), it is clear that the two projects have similar amounts of technical debt but very different distributions.
In one case (Application A) the portion of the debt relative to the “agility of the code” is relatively low; in the other it is rather high. In the second case, it will probably be beneficial to refactor the code before maintaining it in an agile mode.
It takes some calibration effort to know which threshold should be used within a specific organization and context, but this dedicated SQALE index is of obvious interest for this type of decision.
SQALE Pyramid samples issued by Sonar
I have read many blogs and articles on Technical Debt, and I have also participated in exciting events on the topic. There are at least two major and positive messages that are always raised:
I have a concern about what should be included in Technical Debt.
If, at a point in time, we analyse the source code of an application, we will surely have findings and room for improvement. Do we have to count everything as Technical Debt? It sounds logical, but if we take a step back, it seems to me that we should differentiate two types of findings.
1st Category: findings related to violations of good coding/implementation practices, violations of architecture constraints, etc. In this category I would put, as examples:
2nd Category: findings associated with the fact that, since the software was delivered, technology has progressed. New approaches and tools are now available, allowing better stability, changeability, performance, etc. Examples that come to mind are:
From my point of view, the second category should not be counted as Technical Debt; it is just obsolescence.
Obsolescence should be used for managing the application and governing a portfolio. On the balance sheet, this figure will have the same negative effect as Technical Debt. But to be more precise, it should go in a specific cell dedicated to evaluating the depreciation of the application, not in a “debt” cell.
If we go back to Ward Cunningham's original quote, Technical Debt comes from the “not right code”. That means it comes from violations of the source code against its requirements. Ward does not include any additional root causes.
If we include in Technical Debt findings whose root causes are linked only to technical progress and obsolescence, Technical Debt will increase over time without any change to the code, and we will attribute unfair debt to developers.
Does obsolescence count as Technical Debt?
What about differentiating Technical Debt and Technical Obsolescence?
What’s your opinion?
Estimating the value of the Technical Debt of a project is not enough to be able to manage it.
When you have estimated the value of your debt, you have just made a first step. You know where you are, but that does not help you decide where to go and how to get there.
I have tried to describe here what “Managing Technical Debt” means to me personally. This is certainly not a complete inventory, but I hope it will at least contribute to the debate.
In the following, and as stated by W. Cunningham, I consider Technical Debt to be the result of “not right code”.
In my opinion, “Managing Technical Debt” means to be able at least to perform the following:
1) Set project goals related to Technical Debt. Establish quantifiable goals in terms of amount or density, in terms of nature etc. and answer questions such as:
2) Monitor the amount of Technical Debt over time (either the absolute value or the density) and answer questions such as:
3) Compare the Technical Debt for different projects or subcontractors and answer questions such as:
4) Analyze the temporal origin of the Technical Debt and answer questions such as:
5) Analyze the physical origin of the Technical Debt and answer questions such as:
6) Analyze the technical origin of the Technical Debt, which means obtaining information on different “bad practices” that generated the debt, (and then perhaps launch awareness, coaching sessions on some specific topics) and answer questions such as:
7) Analyze the points you want to address by reducing the Technical Debt and answer questions such as:
8) Analyze the impact of the Technical Debt from a business perspective (that creates issues or risks for the business) and answer questions such as:
9) Set priorities for reimbursing the Technical Debt. Be able to optimize the results of a partial payback of the debt (This is the typical situation as it is rare to have sufficient budget to reimburse all of the debt).
I had initially identified some additional questions but chose not to keep them because they are too dependent on context and need some local feedback and calibration, so they can’t be answered immediately after the deployment of a solution. For example:
If I spend 100 hours to decrease the Technical Debt,
I consider that if you have put in place a solution that provides answers to all these questions, then you can really say that you “Manage your Technical Debt”.