Latest Jenkins Newsletter – Fall 2013

The latest Jenkins newsletter, Continuous Information, has just been released. It’s a great source for all things Jenkins. To check it out, visit this link.

Why not subscribe to the newsletter here:

http://www.cloudbees.com/jenkins/jenkins-ci/jenkins-newsletter.cb

Or you can keep visiting our Blog – you never know what else you might find!

 

BPM Tip # 2 – Design

When you think BPM, what do you really mean? More jargon? A philosophy? A system? What is BPM?

There’s a heap of vigorous discussion being had the world over, but what it really means is Business Process Management – a way for a business to know & manage what it does, so that it gets the results it expects, every time. Whether that means bills come in and are processed & paid correctly and on time, or the customer comes in, orders their coffee & walks out satisfied that they’ve got the kick-start for their day – both are processes that someone needs to follow (hopefully) the same way every time, & there is (hopefully) someone accountable if things go wrong.

You can never replace the people in a process – maybe in the future robots might, to some degree – but until then we can only get help with some parts of a process: systems & technology can help, or they can really disrupt operations & be a right pain. But readers beware… if your manager comes back from a conference & runs in claiming “I have seen the future & behold it is good…”, then starts regaling you with details of a system they saw & how it will transform the business, that would be a good place to pause. Others might say: be afraid… be very afraid.

Please, do yourself a favour – before you start to think about any systems (BPMS), optimise your processes first, or else you may end up automating & delivering a bad result faster than you used to! To do that, you need skilled people to help transform the processes. These can come from within your business, or be external experts brought in to help – either way, you need these skills. You want to know what issue you are trying to fix, how you might measure it, what success looks like, and a lot of other criteria – you can use these as the litmus test for your improved processes.

Think about using experienced experts (aka “process tragics”) – they exist – to help kick-start the transformation piece; maybe they’ll suggest automating in small increments, but make sure the experts also impart skills to your own people. After all, you’ll want to keep improving many other processes & you need your own teams to make it “what they do every day”. Once you’re doing that, all the time, you’ll find that you truly are living the BPM dream.

BRIEFING: Delivering Better Software (Tests as a Communication Tool)

Come and join us for drinks, socialising and a special presentation.
This will be a really informal session, with plenty of opportunities to ask questions and interact. If that doesn’t sell it to you, how about the FREE DRINKS??

The Talk

Completing the circle: Automated web tests, ATDD and Acceptance Tests as a team communication tool

Acceptance Test Driven Development, or ATDD, has proven to be a very effective technique, both for driving and guiding development and for enhancing communication between developers and other project stakeholders. But why stop there? Well-designed Acceptance Tests can also act as a formidable documentation source and communication tool. Indeed, when written in a narrative, BDD-type style, Acceptance Tests have the potential to document in detail how the user interacts with the application.
In this talk we will look at the role of automated Acceptance Tests not only for testing, but also as part of the whole development lifecycle, from writing the user stories right through to deploying the application. We will also look at ways to make your automated acceptance tests more expressive and how to use them more effectively as a communication, reporting and documentation tool.
Finally, we will present and demonstrate a new open source library that helps developers and testers write automated acceptance tests for web applications using WebDriver/Selenium 2. This library also produces clean, narrative-style reports illustrated with screenshots that effectively describe the application’s functionality and behaviour, as well as any regressions or pending features.

The Speaker

CEO of Wakaleo Consulting, John is an experienced consultant and trainer specialising in Enterprise Java, Web Development and Open Source technologies. John is well known in the Java community for his many published articles, and as the author of Java Power Tools and Jenkins: The Definitive Guide.
John helps organisations around the world to improve their Java development processes and infrastructures and provides training and mentoring in open source technologies, Test Driven Development (TDD, BDD and ATDD), Automated Web Testing, SDLC tools, and agile development processes in general.

The Begging

A fascinating subject that should give you some great ideas and techniques to take back to your team.
This is our first joint event and we’d really appreciate your support. We’ve booked a big room and need to fill it! PLEASE BRING YOUR FRIENDS…

When

Thursday, June 23, 2011 from 5:00 PM – 8:00 PM (GMT+1000)

Where

Combined Services Club (upstairs)
5-7 Barrack Street
(Cnr of Clarence, next to Officeworks)
Sydney, New South Wales 2000
Australia

Registration

So complete the free registration at: http://bettersoftwarebriefing.eventbrite.com

Centrum Systems at Agile Australia 2011

Centrum Systems will be sponsoring Agile Australia 2011.

Agile Australia is going to be packed with case studies of how leading businesses are adopting an Agile approach to stay ahead! Speakers include Agile dignitaries Alistair Cockburn and Martin Fowler, international Agile guru Jean Tabaka, and celebrated Australian industry author Rob Thomsett.

  • Learn how to respond quickly to change, minimise overall risk, improve quality, and enhance project outcomes
  • Discover compelling examples of innovation and business value achieved through Agile

Please come to our stand and say hello…

Coding standards harmony

Coding standards

Most mature software development companies or departments define their own coding standards.  The intention is simple: ensure all code looks alike, to ease reading, writing, maintaining and communicating code.  As a first effort, these coding conventions may be expressed in some form of standalone document, but conventions that are not enforced are simply a waste of time.  In the Java world, various tools have existed for some time to help us enforce and adhere to coding standards: Checkstyle, Eclipse and Sonar.  Until fairly recently, it was laborious to make those tools work together to help us achieve code consistency.  Thankfully, as these tools matured, it is now possible to define and enforce coding standards effortlessly, and the synergy between them may even surprise you.

Recap

Let’s quickly state the purpose of each tool before we move on.

Checkstyle

Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard. It automates the process of checking Java code to spare humans this boring (but important) task. This makes it ideal for projects that want to enforce a coding standard.
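
To give a taste of what a Checkstyle rule set looks like, here is a minimal configuration sketch.  The module names are standard Checkstyle checks, but the selection and values are illustrative only – pick the rules that match your own team’s conventions:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
    "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
<!-- Illustrative coding standard: adjust rules and values to taste -->
<module name="Checker">
  <module name="TreeWalker">
    <!-- Keep lines readable -->
    <module name="LineLength">
      <property name="max" value="120"/>
    </module>
    <!-- No dead imports -->
    <module name="UnusedImports"/>
    <!-- Constants must be UPPER_CASE -->
    <module name="ConstantName"/>
    <!-- Method parameters should be declared final -->
    <module name="FinalParameters"/>
  </module>
</module>
```

This same file is what gets shared with Sonar and the Eclipse Checkstyle plugin later on, which is what makes the synchronisation described below possible.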

Eclipse formatter

The Eclipse formatter is a set of rules that defines how code will be formatted.

Eclipse clean up

The clean up utility helps to apply formatting rules and coding conventions to a single file or to a set of files in one go.

Eclipse save actions

Save actions are similar to clean up and they define what should happen to the code when a file is saved.  For example, save actions can ensure code is formatted, unused imports are removed and arguments are set to “final” right before the file is saved.

Sonar

Sonar is an open platform to manage code quality.

A common situation

It is quite common to define coding standards using Checkstyle and include the configuration as part of a project.  The Eclipse formatter, clean up and save actions are then configured manually to match the Checkstyle rules.  In addition, Checkstyle runs as part of the build to publish the code violations report to a file or to Sonar.  Better integrated teams also use the Checkstyle Eclipse plugin to see the violations against their code; as the code changes to adhere to the standards, the plugin reflects that.
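
Running Checkstyle as part of a Maven build typically looks something like the following pom.xml fragment.  This is a sketch – plugin version and report wiring are omitted, so check the maven-checkstyle-plugin documentation for the options available in your version:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <configuration>
        <!-- The shared coding standard definition -->
        <configLocation>checkstyle.xml</configLocation>
      </configuration>
      <executions>
        <execution>
          <!-- Check the code (and optionally fail the build) on verify -->
          <phase>verify</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```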

Shortcomings

The common situation outlined above is a decent setup, but it has some shortcomings.  If the coding standard rules change, the change ripples through all the tools.  The Eclipse formatter needs to reflect the new coding standard rules, as do the clean up and the save actions.  Furthermore, Sonar needs to be updated with the new rules.  In addition, sharing the Checkstyle file between projects and teams can become a chore.  There are ways to share a remote coding standard file between teams, but that does not address the lack of synchronisation between all the tools… until recently.

Harmony

Checkstyle, Sonar and Eclipse have been around for a long time, and as these tools matured they developed great integration between them.  By aligning these tools it is possible to establish one central coding standard rule set and reflect it in the development environment automatically.  Furthermore, once configured, changes to coding standards are propagated automatically, so developers are always informed about up-to-date coding standards and apply them as they code.

Example

Let’s look at an example of how to best utilise Checkstyle, Eclipse and Sonar.  To give the example more relevance, let’s start with an existing “legacy” project where coding standards have not necessarily been respected.

Assumptions

The example assumes the following:

  • Java project
  • Maven build
  • Checkstyle file expressing the coding standards
  • Sonar
  • Eclipse
  • Eclipse Checkstyle plugin
  • Eclipse Sonar plugin

Initial coding standard report

We’ll start from the point where an initial Checkstyle configuration has been uploaded to Sonar and a Sonar report has been produced for our existing project.

View and reduce violations in the IDE

Next, we’ll configure Eclipse to see those violations closer to the code.  In order to do so, we’ll need to configure the Eclipse Checkstyle plugin with the same rules as Sonar and apply the configuration to the projects.

  1. Grab a link to the Checkstyle configuration from Sonar (Permalinks > Checkstyle)
  2. Reference the Checkstyle rules in Eclipse (Window > Preferences > Checkstyle > New)
  3. Configure Checkstyle for a project (right click project > Properties > Checkstyle).  Please note the Write formatter/cleanup config checkbox.  This is the part that synchronises the coding standards with the Eclipse formatter and clean up.  You can also right click on your project > Checkstyle > Create Formatter-Profile to achieve the same thing.  This kind of synchronisation alleviates the painful manual synchronisation between Checkstyle and Eclipse; brilliant!
  4. Once Checkstyle has been configured and enabled for a project, notice that violations are annotated alongside the code
  5. Now that Eclipse has been configured with the coding standard rules and the formatting profiles have been updated, we can bulk clean up existing code and go a long way towards ensuring the code adheres to standards.  After pressing Next > (to review upcoming changes) or Finish, Eclipse will do what it can to help the code adhere to standards.
  6. After republishing a code standard report to Sonar, we can see a reduction in violations

Save actions

Once the Eclipse formatter and clean up profiles have been updated, don’t forget to update the save actions so that as many coding standards as possible are applied automatically before every save.

Eclipse Sonar integration

Similarly to the Checkstyle Eclipse plugin, there is a Sonar Eclipse plugin that will annotate code with the violations as seen in Sonar.  In addition to Checkstyle violations, the Sonar Eclipse plugin shows Findbugs and PMD violations (whatever static code analysis tools are configured).  The integration is quite simple.

  1. Install the Sonar Eclipse plugin
  2. Identify your Sonar installation
  3. Associate your Eclipse projects with their Sonar equivalents (note that your project has to have at least one Sonar report published)
  4. Once the configuration is complete, you can see the violations as published to Sonar annotated in your code
  5. Please note that the code is annotated with the violations as they were found in Sonar.  If code changes are made, those violations will remain and get out of sync.  Alternatively, you can choose to rerun the checks locally and refresh the violations view.

The usefulness of the Checkstyle Eclipse plugin

Although the Sonar Eclipse plugin may make the Checkstyle Eclipse plugin look superfluous, remember that it is the latter that updates the Eclipse formatting rules as well as the clean up profiles.  Unless or until the Sonar Eclipse plugin fulfils the same duty, the Checkstyle Eclipse plugin remains very useful.

Not everything can be automated

Please note that although a lot of coding standards can be applied retroactively and automatically, some violations cannot be automatically eradicated.  Nonetheless, Checkstyle, Eclipse and Sonar can identify the problematic code and guide developers towards coding standard compliance.

Conclusion

Coding standards are a preoccupation for most software development teams.  Defining coding standards is one thing, but enforcing them effectively is another.  Thankfully, as Checkstyle, Eclipse and Sonar have matured, defining and enforcing coding standards can be a straightforward and sustainable activity.

Ratcheting up code coverage with Sonar

At Centrum we are often brought in by organisations that want to improve the quality of their software deliverables and remove some of the unwanted “excitement” from the delivery process.  We love engagements like this because it means that the client understands that there is a cost to neglecting to focus on quality and that they are open to changing processes and tools to move forward and start paying off that technical debt.

Unit test coverage – the easy bit…

A drive for change often starts when a new “green fields” project is chosen and high unit test coverage is encouraged (or enforced), perhaps alongside practices such as TDD.  The benefits can be seen by the team involved in the project and this message is taken on board by management.  Unit testing has been deemed to be “a good thing”.

So now for the legacy code…right?

So now the organisation or team has bought into the benefits of a good level of unit test coverage and wants to roll it out to all their projects.  However, the problem seems insurmountable.  The code analysis shows that your current coverage is at < 2%.  How do you get up to your target?  Often the response is to only enforce coverage on the new projects that were built from day 1 with high coverage enforced.  This can mean that you are actually enforcing standards on a tiny proportion of your organisation’s code.  Another option is of course to invest in writing test cases for the legacy code.  However, this investment is rarely made, nor is it necessarily recommended: test cases are most valuable when written before or at the time that the code is written.

The third way: ratcheting up coverage

What we often recommend when we hit the situation outlined above is to take a continual improvement approach.  Find ways to gradually improve the quality of your code and build momentum.  Find some metrics that can show a positive view of the improvements being made; don’t simply compare your legacy project’s 2% coverage with your green fields project at 80%.  The 80% is an impossible short-term target and actually acts as a disincentive to improvement.

Sonar now reports coverage of recent changes

Sonar has just introduced functionality to show the coverage of recent changes.  This allows you to enforce coverage on every line of code added or changed during a project, and over time your overall coverage will get there.  It also has the effect of introducing tests for the parts of your code base that change most frequently, and which therefore get the most value out of them.

Sonar Dashboard

What is also pretty neat is the ability to show the source code marked up to highlight only the untested code, and only for the period that you are interested in.  This gives developers the feedback they need to write tests that cover changed code.

Filtered code coverage

Footnote:  Sonar for the uninitiated

Sonar is an open source quality platform.  It collates and mines data from a variety of code analysis tools, as well as its own built-in ones, to give you a holistic view of the quality of your software.  The “7 axes of code quality” as described by Sonar are: Architecture & Design, Duplications, Unit Tests, Complexity, Potential Bugs, Rules, and Formatting & Comments (Documentation).

Moving the Measures

At Centrum we like to say we’re the folks our clients come to when there are measures that haven’t been moving (and they want them to start moving, and moving in the right direction).

We call this Moving the Measures, and it’s a mantra we use to guide us in creating powerful and enduring relationships with our customers.

So what do we mean by Moving the Measures?

One of the things that comes up from time to time in conversations with clients, as a Case for Change is built, is some flavour of…

I can see the value you guys bring but when my colleagues ask “How come we are using consultants; why don’t we just do this with internal resources or specialist contractors?” I’m not sure how to answer their question

This is a really great question that deserves a closer look.

Filling the Gap
Specialist contractors are a great source of additional capacity and skills to augment what’s available from internal resources.  Bringing on specialist contractors is a great way to meet a demand for additional capacity and/or fill a skills gap.

Moving the Measures
When Centrum talks about Moving the Measures, what we are talking about is getting to the source of what will have the measures a client wants to move actually move, and move in the direction the client wants them to move.

So although what Centrum brings to an engagement includes process, skills, know-how, experience, tools and technology, the context in which this expertise is delivered is the commitment we have co-created with our clients: to move the measures our clients care about as part of the engagement.

Moving the Measures is something Centrum is passionate about!

Gradually increasing code coverage in untested projects

Untested code
There are many reasons and excuses why some applications are untested by automated tests, or at least not well tested.  It could be an older application, the application might be hard to test, or the people writing it simply did not have the habit of writing automated tests.  Having said that, most of us have either inherited or created an untested mess at one point or another.  This article explores techniques for increasing code coverage on such projects.

Coverage targets
Say we set a goal of 80% test coverage and fail builds if they don’t meet that threshold.  It is naturally quite feasible to meet this goal with every new check-in.  Thus it is much easier to set and enforce this kind of code coverage target on new projects than on older, untested ones.  It would be very disruptive to expect and enforce 80% code coverage straight away on a project that is 5% tested.  A more gradual approach is necessary.

Moving forward
Revisiting the case of an older untested project, let’s see how we can work towards gradually increasing code coverage.  When working with an existing project, the only thing developers can promise is that new code will be tested.  Luckily, this promise can also be enforced using tools like Clover and its history threshold.  The history threshold can be used to enforce that coverage never decreases, meaning that if new code is added to the application, it needs to be covered by tests.  This practice can help build a culture of automated testing while increasing code coverage for an application.  Eventually, if efforts are made to increase code coverage for the rest of the application, an absolute threshold can be introduced to ensure that coverage does not fall below a certain acceptable level.
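
To sketch how this might be wired up with the Clover Maven plugin: the idea is to record a coverage history point on each build and then have the check fail if coverage drops relative to history.  Treat the element names below as indicative only – they vary between Clover plugin versions, so check the documentation for your version rather than copy-pasting:

```xml
<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <configuration>
    <!-- Where historical coverage points are stored between builds -->
    <historyDir>${project.basedir}/.cloverhistory</historyDir>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <!-- Record a history point, then fail the build if coverage
             has decreased relative to the recorded history -->
        <goal>save-history</goal>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

For the history comparison to work across builds, the history directory needs to persist (for example, on the CI server’s workspace or a shared location).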

Testing can be disruptive
While we are fervent practitioners of automated testing and test driven development, we do recognise that in some situations, creating tests for every line of code can be a bit disruptive.  Naturally, throwaway proofs of concept don’t need the same strict level of code coverage as applications that will need maintenance.  Furthermore, it is not trivial to work with and test code using frameworks, tools or languages unfamiliar to the team, and there are also frameworks that are not easily testable.  But as a team learns how to write automated tests effectively, the technique described above can be applied to make up for the initial looseness.

Wrapping up
Finally, while all code should be checked by automated tests, we find that this is not always the case.  Many reasons lead to untested code, but there are tools and techniques like Clover’s history threshold that can put a project back on track in a controlled and steady fashion.  When a situation is bad, we can at least ensure that we are making it better, rather than adding to the problem.

The Power of the Dashboard

Some not so good News

It’s around 4pm on a Tuesday.  We have completed our Review at the end of the third iteration and we are about to start the Retrospective in what is now going to be a four- rather than three-iteration Initial Release.

Since starting the project – using Scrum for the first time – the scope has increased nearly 40%, the time frame has gone from 6 weeks to 8 weeks, the expected investment has increased by a third and two of the four-person team have been put onto other assignments. To top it off, the team’s performance this iteration has gone from a velocity of 31 points per “standard iteration” to just 10 points.

Reporting news like this to a steering committee or a project owner/sponsor would have many project managers bracing themselves as they stepped over the threshold and into the meeting room.

Inspecting and Adapting in Action

What’s striking about this meeting is there is a calmness to the conversations – there’s no upset, no raised voices, no withholds, no pursed lips or forcefully contained admonishments, no blaming, no looking for reasons, no CYA, no resignation. Just conversations about what happened and what there is to learn from what happened.

This is the Inspect and Adapt part of Agile practices applied to the Agile Practice itself in action and what was informing and shaping the conversations was what was being shown on the dashboard.

Two weeks later a very happy Product Owner takes delivery of the Initial Release, clear that the project delivered on time and within budget. At a company dinner a few weeks later colleagues listen attentively as team members share enthusiastically what it’s like to “really do Scrum”.

The team goes on to sustain this healthy productive collaborative environment through a further 5 iterations for a second release and the product is now in production serving the needs of the stakeholders with a very satisfied Product Owner.

The Power of the Dashboard

This experience really put a spotlight on the power of a Dashboard to inform and shape conversations.

Informing and Shaping Conversations About What Happened

Between the first and second iterations, the number of story points delivered in an iteration had increased by 50% while the effort invested over the iteration had decreased by 33%. One story had been added, two stories had been re-estimated and one story removed.

The dashboard made it easy for us to see the impact of this variability on the project’s budget, timeline and scope.

Something that was immediately obvious: we could see how the investment performance (as measured by the dollar cost per story point) had doubled between the first and second iterations.
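
As a rough sanity check of that observation, using the numbers above (story points up 50%, effort – and hence cost – down by a third), with C and P the first iteration’s cost and points delivered:

```latex
\frac{\text{points per dollar (iteration 2)}}{\text{points per dollar (iteration 1)}}
  = \frac{1.5P \,/\, \tfrac{2}{3}C}{P \,/\, C}
  = \frac{1.5}{2/3}
  = 2.25
```

Equivalently, the dollar cost per story point fell to roughly 44% of its iteration-one value – a little better than a doubling of investment performance, consistent with what the dashboard showed.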


Informing and Shaping Conversations for Time, Budget and Scope

However, even with the increase in team performance, completing the release in three iterations looked doubtful. When the team raised slipping the schedule from three iterations to four for the same scope, we were confronted by the way the dollars per story point would increase by a third, and with that the conversations began to focus on how we might minimise the work required to get each story complete and thereby avoid slipping the schedule.

At the same time, everyone was good with increasing the number of iterations to account for a net increase in scope… when the product owner added a net 6 points to the backlog everyone could get why the number of iterations had increased by 1.

Informing and Shaping Conversations for Performance

Looking back it’s clear that something had shifted for the team… it was like everyone’s attention was now focussed on minimising the dollars per story point. Managing for investment performance was a new context from which to operate and organise our thinking as a team.

And so when the team met at the third iteration review, managing for investment performance was available as a context for interpreting the most recent performance. When we entered the numbers, all the key performance measures for the iteration were impacted…

  • Dollars per story point tripled
  • Velocity was down by two-thirds
  • Estimation variance also dipped

Informing and Shaping Conversations for Performance Improvement

As a team we could readily see the impact on the performance measures of committing to 3 stories, going after 4 and delivering 2. Which led to the next question: what has been the impact on the overall project timeline and budget, and is this drop in the performance measures indicative of things to come?

The team put the drop in performance down to not paying enough attention to the conditions of satisfaction that the product owner had put into the stories… something that could be readily addressed in the retrospective as a lesson learned.

A Retrospective Free from Distractions

So when it came time for the third Retrospective, the team got to engage in looking at what worked and what didn’t, free from distractions about what had happened and what it meant for the project. And one iteration later…

…a very happy Product Owner takes delivery of the Initial Release, clear that the project delivered on time and within budget. At a company dinner a few weeks later colleagues listen attentively as team members share enthusiastically what it’s like to “really do Scrum”.