
BRIEFING: Delivering Better Software (Tests as a Communication Tool)


Come and join us for drinks, socialising and a special presentation.
This will be a really informal session, with plenty of opportunities to ask questions and interact. If that doesn’t sell it to you, how about the FREE DRINKS??

The Talk

Completing the circle: Automated web tests, ATDD and Acceptance Tests as a team communication tool

Acceptance Test Driven Development, or ATDD, has proven to be a very effective technique, both for driving and guiding development, and for enhancing communication between developers and other project stakeholders. But why stop there? Well-designed Acceptance Tests can also act as a formidable documentation source and communication tool. Indeed, when written in a narrative, BDD-type style, Acceptance Tests have the potential to document in detail how the user interacts with the application.
In this talk we will look at the role of automated Acceptance Tests not only for testing, but also as part of the whole development lifecycle, from writing the user stories right through to deploying the application. We will also look at ways to make your automated acceptance tests more expressive and how to use them more effectively as a communication, reporting and documentation tool.
Finally, we will present and demonstrate a new open source library that helps developers and testers write automated acceptance tests for web applications using WebDriver/Selenium 2. This library also produces clean, narrative-style reports illustrated with screenshots that effectively describe the application’s functionality and behaviour, as well as any regressions or pending features.
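
The library presented in the talk isn't named in this announcement, so as a rough, hypothetical illustration of the narrative style being described, here is a minimal plain JUnit 4 / WebDriver sketch in which the test and its steps read like a user story. The application URL, class and element names are invented for illustration; this is not the library's own API.

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Narrative-style acceptance test: the test method reads like a user story,
// and each step is a small, descriptively named helper method.
public class SearchForArticlesTest {

    private WebDriver driver;

    @Before
    public void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void a_reader_can_find_articles_about_test_coverage() {
        opensTheArticlesPage();
        searchesFor("code coverage");
        shouldSeeResultsMentioning("coverage");
    }

    private void opensTheArticlesPage() {
        driver.get("http://localhost:8080/articles");   // hypothetical application URL
    }

    private void searchesFor(String term) {
        driver.findElement(By.name("q")).sendKeys(term);
        driver.findElement(By.name("q")).submit();
    }

    private void shouldSeeResultsMentioning(String term) {
        assertTrue("expected the results to mention " + term,
                driver.getPageSource().toLowerCase().contains(term.toLowerCase()));
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}

The library demonstrated in the talk builds on this kind of step-by-step structure to produce the narrative, screenshot-illustrated reports described above.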

The Speaker

CEO of Wakaleo Consulting, John is an experienced consultant and trainer specialising in Enterprise Java, Web Development, and Open Source technologies. John is well known in the Java community for his many published articles, and as the author of Java Power Tools and Jenkins: The Definitive Guide.
John helps organisations around the world to improve their Java development processes and infrastructures and provides training and mentoring in open source technologies, Test Driven Development (TDD, BDD and ATDD), Automated Web Testing, SDLC tools, and agile development processes in general.

The Begging

A fascinating subject that should give you some great ideas and techniques to take back to your team.
This is our first joint event and we’d really appreciate your support. We’ve booked a big room and need to fill it! PLEASE BRING YOUR FRIENDS…

When

Thursday, June 23, 2011 from 5:00 PM – 8:00 PM (GMT+1000)

Where

Combined Services Club (upstairs)
5-7 Barrack Street
(Cnr of Clarence, next to Officeworks)
Sydney, New South Wales 2000
Australia

Registration

Please complete the free registration at: http://bettersoftwarebriefing.eventbrite.com

Ratcheting up code coverage with Sonar


At Centrum we are often brought in by organisations that want to improve the quality of their software deliverables and remove some of the unwanted “excitement” from the delivery process.  We love engagements like this because it means the client understands that there is a cost to neglecting quality, and that they are open to changing processes and tools in order to move forward and start paying off that technical debt.

Unit test coverage – the easy bit…

A drive for change often starts when a new “greenfield” project is chosen and high unit test coverage is encouraged (or enforced) on it, perhaps alongside practices such as TDD.  The team involved sees the benefits, the message is taken on board by management, and unit testing is deemed to be “a good thing”.

So now for the legacy code…right?

So now the organisation or team has bought into the benefits of a good level of unit test coverage and wants to roll it out to all of its projects.  However, the problem seems insurmountable.  The code analysis shows that your current coverage is below 2%.  How do you get up to your target?  Often the response is to enforce coverage only on the new projects that were built with high coverage from day one.  This can mean that you are actually enforcing standards on a tiny proportion of your organisation’s code.  Another option is, of course, to invest in writing test cases for the legacy code.  However, this investment is rarely made, nor is it necessarily recommended: test cases are most valuable when written before, or at the time, the code is written.

The third way: ratcheting up coverage

What we often recommend when we hit the situation outlined above is to take a continual-improvement approach.  Find ways to gradually improve the quality of your code and build momentum.  Find metrics that show a positive view of the improvements being made; don’t simply compare your legacy project’s 2% coverage with your greenfield project’s 80%.  The 80% is an impossible short-term target and actually acts as a disincentive to improvement.

Sonar now reports coverage of recent changes

Sonar has just introduced functionality to show the coverage of recent changes.  This allows you to enforce coverage on every line of code added or changed during a project, and over time your overall coverage will get there.  It also has the effect of introducing tests for the parts of your code base that change most frequently, and which therefore get the most value out of them.

Sonar Dashboard

What is also pretty neat is the ability to show the source code marked up to highlight only the untested code, and only for the period you are interested in.  This gives developers the feedback they need to write tests that cover changed code; a short sketch of what that can look like follows the screenshot below.

Filtered code coverage
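
To make the idea concrete, here is a hypothetical JUnit 4 sketch of a change arriving together with its tests: a small method added to an otherwise untested legacy class, with tests committed in the same change.  The class and method names are invented for illustration; the point is simply that every changed line is exercised, so the coverage-on-new-code figure stays high even while overall coverage is still low.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical example: a new method added to a largely untested legacy class,
// with unit tests committed in the same change.  The rest of the class stays
// untested for now, but every line touched by this change is covered.
public class InvoiceDiscountTest {

    // Stand-in for the legacy class; imagine totalWithDiscount() was just added.
    static class Invoice {
        private final double total;

        Invoice(double total) {
            this.total = total;
        }

        double totalWithDiscount(double discountRate) {
            if (discountRate < 0 || discountRate > 1) {
                throw new IllegalArgumentException("discount rate must be between 0 and 1");
            }
            return total * (1 - discountRate);
        }
    }

    @Test
    public void appliesAPercentageDiscountToTheTotal() {
        assertEquals(180.0, new Invoice(200.0).totalWithDiscount(0.10), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsDiscountRatesAboveOneHundredPercent() {
        new Invoice(200.0).totalWithDiscount(1.5);
    }
}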

Footnote:  Sonar for the uninitiated

Sonar is an open source quality platform.  It collates and mines data from a variety of code analysis tools, as well as its own built-in ones, to give you a holistic view of the quality of your software.  The “7 axes of code quality” as described by Sonar are: Architecture & Design, Duplications, Unit Tests, Complexity, Potential Bugs, Coding Rules, and Comments (Documentation).

Gradually increasing code coverage in untested projects


Untested code
There are many reasons and excuses why some applications are untested by automated tests, or at least not well tested.  It could be an older application, the application might be hard to test, or the people writing it simply did not have the habit of writing automated tests.  Having said that, most of us have either inherited or created an untested mess at one point or another.  This article explores techniques for increasing code coverage in such projects.

Coverage targets
Say we set a goal of 80% test coverage and fail builds if they don’t meet that threshold.  On a new project it is quite feasible to meet this goal with every check-in, so it is much easier to set and enforce this kind of code coverage target on new projects than on older, untested ones.  It would be very disruptive to expect and enforce 80% code coverage straight away on a project that is 5% tested: on a codebase of, say, 100,000 lines, that would mean writing tests for roughly 75,000 currently untested lines before the build goes green again.  A more gradual approach is necessary.

Moving forward
Revisiting the case of an older, untested project, let’s see how we can work towards gradually increasing code coverage.  When working with an existing project, the only thing developers can realistically commit to is that new code will be tested.  Luckily, this commitment can also be enforced by using tools like Clover and its history threshold.  The history threshold can be used to enforce that coverage does not decrease, meaning that if new code is added to the application, it needs to be covered by tests.  This practice can help build a culture of automated testing while increasing code coverage for the application.  Eventually, if efforts are made to increase code coverage for the rest of the application, an absolute threshold can be instated to ensure that coverage does not fall below an acceptable level.
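
Clover does this bookkeeping for you, but the underlying ratchet idea is simple enough to sketch in a few lines of plain Java.  The snippet below is a conceptual illustration only (not Clover’s actual mechanism or API): it fails the build if the measured coverage drops below the best figure recorded so far, and records new highs.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Conceptual coverage "ratchet": compare the current coverage percentage
// (passed as the first argument) against a stored high-water mark, fail if
// coverage has dropped, and record the new mark otherwise.
public class CoverageRatchet {

    public static void main(String[] args) throws IOException {
        double current = Double.parseDouble(args[0]);          // e.g. "7.5" (percent)
        Path history = Paths.get(".coverage-high-water-mark");

        double highWaterMark = 0.0;
        if (Files.exists(history)) {
            highWaterMark = Double.parseDouble(
                    new String(Files.readAllBytes(history), StandardCharsets.UTF_8).trim());
        }

        if (current + 0.01 < highWaterMark) {                  // small tolerance for rounding
            System.err.printf("Coverage dropped: %.2f%% is below the high-water mark of %.2f%%%n",
                    current, highWaterMark);
            System.exit(1);                                    // break the build
        }

        double newMark = Math.max(current, highWaterMark);
        Files.write(history, String.valueOf(newMark).getBytes(StandardCharsets.UTF_8));
        System.out.printf("Coverage OK: %.2f%% (high-water mark now %.2f%%)%n", current, newMark);
    }
}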

Testing can be disruptive
While we are fervent practitioners of automated testing and test-driven development, we do recognise that in some situations creating tests for every line of code can be a bit disruptive.  Naturally, throwaway proofs of concept don’t need the same strict level of code coverage as applications that will need to be maintained.  Furthermore, it is not trivial to work with and test code built on frameworks, tools or languages that are unfamiliar to the team, and some frameworks are simply not easily testable.  But as a team learns how to write automated tests effectively, the technique described above can be applied to make up for the initial looseness.

Wrapping up
Finally, while all code should be checked by automated tests, we find that this is not always the case.  Many reasons lead to untested code, but tools and techniques like Clover’s history threshold can put a project back on track in a controlled and steady fashion.  When a situation is bad, we can at least ensure that we are making it better rather than adding to the problem.