At Centrum we are often brought in by organisations that want to improve the quality of their software deliverables and remove some of the unwanted “excitement” from the delivery process. We love engagements like this because it means the client understands that neglecting quality has a cost, and that they are open to changing processes and tools in order to start paying off that technical debt.
Unit test coverage – the easy bit…
A drive for change often starts with a new “greenfield” project on which high unit test coverage is encouraged (or enforced), perhaps alongside practices such as TDD. The team involved sees the benefits, the message is taken on board by management, and unit testing is deemed to be “a good thing”.
So now for the legacy code…right?
So now the organisation or team has bought into the benefits of a good level of unit test coverage and wants to roll it out across all its projects. However, the problem seems insurmountable: code analysis shows that current coverage is below 2%. How do you get up to your target? Often the response is to enforce coverage only on the new projects that enforced high coverage from day one. This can mean you are actually enforcing standards on a tiny proportion of your organisation’s code. Another option is, of course, to invest in writing test cases for the legacy code. However, this investment is rarely made, nor is it necessarily recommended: test cases are most valuable when written before, or at the time, the code is written.
The third way: ratcheting up coverage
What we often recommend in the situation outlined above is to take a continual-improvement approach: find ways to gradually improve the quality of your code and build momentum. Find metrics that show a positive view of the improvements being made; don’t simply compare your legacy project’s 2% coverage with your greenfield project’s 80%. The 80% is an impossible short-term target and actually acts as a disincentive to improvement.
Sonar now reports coverage on recent changes
Sonar has just introduced functionality to show coverage on recent changes. This allows you to enforce coverage on every line of code added or changed during a project, and over time your overall coverage will get there. It also has the effect of introducing tests for the parts of your code base that change most frequently, and which therefore get the most value out of them.
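Sonar derives this figure from its own analysis, but the underlying idea can be sketched in a few lines: intersect the set of lines changed during the period with per-line coverage data. A minimal illustration (the data structures here are our own, not Sonar’s API):

```python
# Sketch: coverage restricted to recently changed lines.
# changed: line numbers modified during the period (e.g. from a VCS diff)
# covered: line numbers executed by tests (e.g. from a coverage report)

def new_code_coverage(changed: set, covered: set) -> float:
    """Return the fraction of changed lines that are covered by tests."""
    if not changed:
        return 1.0  # nothing changed, so there is nothing to cover
    return len(changed & covered) / len(changed)

# Four lines changed, three of them exercised by tests: 75% new-code coverage,
# regardless of how low the project's overall coverage is.
print(new_code_coverage({10, 11, 12, 13}, {5, 10, 11, 13, 20}))  # 0.75
```

The overall figure may stay embarrassing for a long time, but this number can be held to a high standard from the very first build.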
Also pretty neat is the ability to show the source code marked up to highlight untested code, but only for the period you are interested in. This gives developers the feedback they need to write tests that cover their changed code.
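That mark-up boils down to flagging lines that appear in the period’s diff but not in the coverage report. A rough sketch of the idea, with hypothetical inputs standing in for the VCS and coverage data (this is not Sonar’s implementation):

```python
# Sketch: render source with a marker on every line that was changed during
# the period of interest but never executed by a test.

def annotate(source_lines, changed, covered):
    """Prefix each changed-but-untested line with '!' so it stands out."""
    marked = []
    for number, text in enumerate(source_lines, start=1):
        flag = "!" if number in changed and number not in covered else " "
        marked.append(f"{flag} {text}")
    return marked

source = ["def pay(x):", "    if x < 0:", "        raise ValueError", "    return x"]
# Lines 2 and 3 were changed recently; tests executed lines 1, 2 and 4,
# so only line 3 gets flagged.
for line in annotate(source, changed={2, 3}, covered={1, 2, 4}):
    print(line)
```

Untouched legacy lines stay unflagged however untested they are, which keeps the feedback focused on the code the developer just wrote.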
Footnote: Sonar for the uninitiated
Sonar is an open source quality platform. It collates and mines data from a variety of code analysis tools, as well as its own built-in ones, to give you a holistic view of the quality of your software. The “seven axes of code quality”, as described by Sonar, are: Architecture & Design, Duplications, Unit Tests, Complexity, Potential Bugs, Coding Rules, and Comments (Documentation).
There are many reasons and excuses why some applications are untested by automated tests, or at least not well tested: it could be an older application, the application might be hard to test, or the people writing it simply did not have the habit of writing automated tests. Having said that, most of us have either inherited or created an untested mess at one point or another. This article explores techniques for increasing code coverage on such projects.
Say we set a goal of 80% test coverage and fail builds that don’t meet that threshold. It is quite feasible to meet this goal with every new check-in, so it is much easier to set and enforce this kind of code coverage target on new projects than on older, untested ones. It would be very disruptive to expect and enforce 80% code coverage straight away on a project that is 5% tested. A more gradual approach is necessary.
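Most coverage tools can enforce such an absolute gate directly (coverage.py, for instance, exposes it as a `--fail-under` option); the check itself amounts to nothing more than a comparison, which makes it obvious why it works for new projects and punishes old ones:

```python
THRESHOLD = 80.0  # the target discussed above; purely illustrative

def passes_gate(coverage_percent: float, threshold: float = THRESHOLD) -> bool:
    """Return True if the build meets the coverage target, False to fail it."""
    return coverage_percent >= threshold

print(passes_gate(85.0))  # True: a well-tested new project passes
print(passes_gate(5.0))   # False: the 5%-tested legacy project fails every build
```

An absolute gate gives the legacy team no credit for improving from 5% to 20%, which is exactly the disincentive a gradual approach avoids.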
Revisiting the case of an older untested project, let’s see how we can work towards gradually increasing code coverage. When working with an existing project, the only thing developers can promise is that new code will be tested. Luckily, this promise can be enforced using tools like Clover and its history threshold. The history threshold can be used to enforce that coverage does not decrease, meaning that if new code is added to the application, it needs to be covered by tests. This practice helps build a culture of automated testing while increasing code coverage for the application. Eventually, once efforts have been made to increase coverage for the rest of the application, an absolute threshold can be instated to ensure that coverage does not fall below an acceptable level.
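Clover configures this through its build integration; the mechanism itself can be sketched generically as a “high-water mark” check: persist the best coverage figure seen so far and fail any build that drops below it. The file format and function below are our own illustration, not Clover’s implementation:

```python
import json
import tempfile
from pathlib import Path

def ratchet(current: float, history: Path) -> bool:
    """Fail (return False) if coverage fell below the recorded high-water
    mark; otherwise record the new mark and pass."""
    best = json.loads(history.read_text())["coverage"] if history.exists() else 0.0
    if current < best:
        return False  # coverage decreased: new code arrived without tests
    history.write_text(json.dumps({"coverage": current}))
    return True

history = Path(tempfile.mkdtemp()) / "coverage-history.json"
print(ratchet(40.0, history))  # True: the first build establishes the baseline
print(ratchet(42.5, history))  # True: coverage improved, the baseline moves up
print(ratchet(41.0, history))  # False: coverage regressed, the build fails
```

Because the bar only ever moves up, even a 5%-tested project can adopt this check on day one and start climbing from wherever it happens to be.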
Testing can be disruptive
While we are fervent practitioners of automated testing and test-driven development, we do recognise that in some situations creating tests for every line of code can be a bit disruptive. Naturally, throwaway proofs of concept don’t need the same strict level of code coverage as applications that will need maintenance. Furthermore, it is not trivial to work with and test code using frameworks, tools, or languages unfamiliar to the team, and some frameworks are simply not easily testable. But as a team learns how to write automated tests effectively, the technique described above can be applied to make up for the initial looseness.
Finally, while all code should be checked by automated tests, we find that this is not always the case. Many things lead to untested code, but tools and techniques like Clover’s history threshold can put a project back on track in a controlled and steady fashion. When a situation is bad, we can at least ensure we are making it better rather than adding to the problem.