There are many reasons and excuses why some applications are untested by automated tests, or at best poorly tested. It could be an older application, the application might have been hard to test, or the people writing it simply did not have the habit of writing automated tests. That said, most of us have either inherited or created an untested mess at one point or another. This article explores techniques for gradually increasing code coverage on such projects.
Say we set a goal of 80% test coverage and fail builds that don't meet that threshold. It is quite feasible to meet this goal with every new check-in on a fresh codebase. Thus it is much easier to set and enforce this kind of code coverage target on new projects than on older, untested ones. It would be very disruptive to expect and enforce 80% code coverage straight away on a project that is 5% tested. A more gradual approach is necessary.
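To make the absolute-threshold idea concrete, here is a minimal sketch of a build-gate script. It is not tied to any particular coverage tool; the threshold value and the way the coverage percentage reaches the script (here, a command-line argument) are assumptions for illustration.

```python
import sys

THRESHOLD = 80.0  # hypothetical project-wide coverage target, in percent


def check_coverage(coverage_percent: float, threshold: float = THRESHOLD) -> bool:
    """Return True if the build should pass, False if it should fail."""
    return coverage_percent >= threshold


if __name__ == "__main__":
    # In a real pipeline this number would be parsed from the coverage
    # tool's report rather than passed on the command line.
    coverage = float(sys.argv[1]) if len(sys.argv) > 1 else 0.0
    if not check_coverage(coverage):
        print(f"FAIL: coverage {coverage:.1f}% is below the {THRESHOLD:.0f}% target")
        sys.exit(1)
    print(f"OK: coverage {coverage:.1f}% meets the {THRESHOLD:.0f}% target")
```

A non-zero exit code is what most CI servers use to mark a build as failed, which is why the script exits with `1` when the check does not pass.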
Revisiting the case of an older untested project, let's see how we can work towards gradually increasing code coverage. When working with an existing project, the only thing developers can reasonably commit to is that new code will be tested. Luckily, this commitment can be enforced with tools like Clover and its history threshold. The history threshold can be used to enforce that coverage does not decrease, meaning that any new code added to the application needs to be covered by tests. This practice helps build a culture of automated testing while steadily increasing code coverage. Eventually, once efforts have been made to cover the rest of the application, an absolute threshold can be introduced to ensure that coverage does not fall below an acceptable level.
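The "coverage must not decrease" idea is often called a coverage ratchet. The sketch below illustrates the concept generically; it is not Clover's actual mechanism or API, and the history-file format and `tolerance` parameter are hypothetical. Like Clover's history threshold, it compares the current build against previously recorded results and fails on a regression.

```python
import json
from pathlib import Path


def ratchet_check(current: float, history_file: Path, tolerance: float = 0.0) -> bool:
    """Fail the build if coverage dropped below the best recorded value.

    `tolerance` permits a small dip (e.g. 0.5 percentage points) so that
    trivial fluctuations do not break the build.
    """
    best = 0.0
    if history_file.exists():
        best = json.loads(history_file.read_text()).get("best", 0.0)
    if current < best - tolerance:
        return False  # coverage regressed beyond the tolerance: fail
    if current > best:
        # Ratchet up: record the new high-water mark for future builds.
        history_file.write_text(json.dumps({"best": current}))
    return True
```

Because the recorded value only ever moves up, each build either holds the line or raises it, which is exactly the gradual improvement described above.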
Testing can be disruptive
While we are fervent practitioners of automated testing and test-driven development, we do recognise that in some situations, creating tests for every line of code can be disruptive. Naturally, throwaway proofs of concept don't need the same strict level of code coverage as applications that will need maintenance. Furthermore, it is not trivial to work with and test code using frameworks, tools or languages unfamiliar to the team, and some frameworks are simply not easily testable. But as a team learns how to write automated tests effectively, the technique described above can be applied to make up for the initial looseness.
Finally, while ideally all code would be checked by automated tests, in practice this is not always the case. Many reasons lead to untested code, but tools and techniques like Clover's history threshold can put a project back on track in a controlled and steady fashion. When a situation is bad, we can at least ensure that we are making it better rather than adding to the problem.