Month: October 2010

Next Generation BPMS

Posted on

When reading the 2010 Forrester Wave reports for BPMS there were a couple of takeaways for me, the main one being the considerations for what is required in a BPMS.

One report is titled ‘Next-Generation BPM Suites Empower Process Owners, Business Users, And Customers’ and the other covers how to select a BPMS (‘Which BPM Suite Is Right For Your Initiative’).

In the first, the key features discussed were:

  • Social and Web 2.0 components encouraging collaboration throughout discovery, design, and development
  • BPM as a service (in the cloud)
  • Data Quality
  • BPMN 2.0 helping to standardise modelling notations across tools within large organisations – hence allowing better end-to-end understanding of processes regardless of the toolset in use by different departments.

This led into the discussion on how to select a BPMS.

If the business plans to drive process transformation, you will want to select a vendor that has strong collaborative process design, prototyping, and shared development capabilities.

So for me we seem to be missing a big part of process analysis – the discovery and analysis of processes that have already been implemented. What about BAM, simulation and optimisation? Identification and prioritisation of improvement opportunities? The promise of BPM is based on implementing models of continual improvement, not on one-off process implementations.

Not many products seem to actually support the whole life cycle, or at least not in a circular fashion. It should be discover, design, execute, monitor, discover, design, and so on.

As a decision maker (assuming a business-led initiative based on the promise of continual improvement), I would be keen to know about the product’s ability to monitor, identify issues, and feed these back into the analysis phase of the cycle.

In my experience BPM initiatives often turn into BPMS implementation IT projects, and complete at the end of one big business case. This may have delivered one or more process improvement iterations, but it has failed to leave the organisation in a place where it can deliver continual improvement to its processes.

Am I missing something or is this just a symptom of the levels of maturity we are at in the BPM space, and hence our requirements for a BPMS?

Taking a BPM approach to BPO

Posted on

Centrum recently attended the first meeting of the soon-to-be-established Australian Business Process Outsourcing Association (ABPOA). We approached the concept with tentative interest, as we were not entirely sure how our approach to BPM would align with the goals and aspirations of the BPO association. It emerged that there are actually some very strong linkages between the two concepts, and we thought we would share some of our initial thoughts.

BPO is not a new concept but it does appear to be gaining some renewed traction as organisations look for the next wave of efficiency savings. Clearly, not all processes are candidates for outsourcing. It’s really the non-differentiating processes for which there is likely to be a business case. But how do organisations decide which processes to outsource and which processes to continue to manage themselves? What sort of information would help to better inform this decision and does BPM have something to offer here?

We think BPM will help, and in more than one way. Firstly, a functioning BPM capability within an organisation, enabled by BPM technology, will provide companies with the sort of information they need to make an informed decision. This will include things like process cycle times, frequency and cost, in addition to quality measures such as the number of exception path executions. A set of modelled processes, using a standard notation such as BPMN, will also greatly help the outsourcer understand how the process currently works and in what way it will change when it is outsourced. In many ways a functioning BPM capability within an organisation looking to outsource processes is a prerequisite to outsourcing, and this is a view held by one of the more prominent BPMS vendors, Progress Savvion.

For BPO organisations the case for BPM is even more compelling. The reputation of their business depends on their ability to successfully manage the processes being outsourced. They also need to provide added value, whether that is running the process at a cheaper price, more efficiently or at a higher level of quality. For these organisations, the processes are differentiating processes, and as such the performance of these processes and the value they provide to clients is what separates them from their competitors.

So, a checklist for organisations looking to outsource processes:

  1. Ensure all your business processes are modelled and properly measured
  2. Decide which processes are candidates for outsourcing based on the information you have captured
  3. Identify an organisation that has a reputation for delivering value to organisations in the target process space
  4. Clearly identify the hand-off points between you and the outsourcer
  5. Agree to a set of measurable SLAs and KPIs
  6. Work with the outsourcer to continually improve the process and ensure the arrangement works effectively for both organisations
  7. Regularly evaluate the benefit of the outsourcing arrangement and ensure you are getting value for money

And for outsourcers:

  1. Employ a continual improvement approach within your organisations to ensure your processes continue to deliver added value to your customers
  2. Use BPM technology to help you measure your processes and to automate the hand-off points
  3. Provide your clients with clear, frequent and comprehensive reports to illustrate the performance of your processes

A focus on continual improvement in both organisations is critical to maintaining the success of the working relationship. Outsourcing is traditionally not a cheap exercise, but the overall cost can be reduced if both organisations practise BPM and use leading technologies to support them in this endeavour.

Implementing Maven in a Legacy Code Base

Posted on

It’s your first day with a new company or client. The usual things happen: you get introduced to everyone, shown where the toilets and fire stairs are located, pointed towards your desk, and allocated a PC and a login to the corporate network. Everything is going fine – you log in successfully, email works, and you start to configure your PC to actually get some work done.

You install the Java IDE (in my case Eclipse) and get the URL for the source code repository. You check out the many modules that make up the application and start trawling through the code looking for hints on how the modules are built. pom.xml…? Nowhere to be seen. Hmm, build.xml…? Ah, there you are. OK, run the Ant build: FAIL, class not found, class not found…

You look to the IDE for guidance: red crosses everywhere, same problem; there are multiple dependencies that are not in place (as far as the IDE is concerned). You figure you will discuss this with someone who is more familiar with the code base and application structure. What you find is that the application and module structure is tightly bound to that person’s IDE and is hidden within IDE metadata files. Worse still, these metadata files are actually checked into the source code repository, complete with hard-coded locations on a particular person’s PC.

Sound familiar? Dealing with a legacy code base that has grown over the years can be very difficult. In some instances the knowledge of the application sits with one or two key staff members. Those people may have architected the application on the run and not followed industry standards. They may no longer be with the company. You’re left to work it all out – where do you start?

If you have a legacy code base that has multiple dependencies and is currently built via Ant, you can implement Maven within this code base. Here are some tips that may prove helpful.

1. Baseline

  • Get the application and all of its modules to a known working state.
  • Ensure that you understand the dependencies between modules and to 3rd party libraries.
  • Ensure that you can successfully build the application via the Ant build scripts.
  • Ensure that you can successfully deploy and run the application.

2. Create POM files for each Application Module

Create a pom.xml file for each of the dependent application modules. Ensure that it contains:

  • A <parent> section to provide the details of the parent POM.
  • A <repository> section to provide the details of the enterprise remote repository.
  • A <dependency> section to provide the details of the install provider. This will be required for step 3 (in my case I used wagon-webdav).
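Putting those three sections together, a module’s pom.xml might look something like the sketch below. The group IDs, repository URL and versions here are illustrative placeholders, not taken from the original build:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Details of the parent POM -->
    <parent>
        <groupId>com.example.app</groupId>
        <artifactId>app-parent</artifactId>
        <version>1.0</version>
    </parent>

    <artifactId>module-1</artifactId>

    <!-- Details of the enterprise remote repository -->
    <repositories>
        <repository>
            <id>inHouseRepo</id>
            <url>http://repo.example.com/maven/releases</url>
        </repository>
    </repositories>

    <!-- The install provider needed for the Ant deployment in step 3 -->
    <dependencies>
        <dependency>
            <groupId>org.apache.maven.wagon</groupId>
            <artifactId>wagon-webdav</artifactId>
            <version>1.0-beta-1</version>
        </dependency>
    </dependencies>
</project>
```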

3. Deploy Application Modules to the Enterprise Remote Maven Repository

Modify the Ant build scripts of each of the dependent application modules to deploy the resulting artefact to the enterprise remote Maven repository. This can be achieved by using the Maven Ant Tasks. The important points to remember are:

  • Ensure that you have a reference to the Maven Ant Tasks and that the file maven-ant-tasks-*.jar is on the classpath.

    <!-- Creates a classpath element for the Maven Ant Tasks -->
    <path id="maven.ant.class.path">
        <fileset dir="${maven.ant.lib.dir}">
            <include name="*.jar" />
        </fileset>
    </path>
    <typedef resource="org/apache/maven/artifact/ant/antlib.xml"
             uri="antlib:org.apache.maven.artifact.ant"
             classpathref="maven.ant.class.path" />
  • Create a classpath element that points to the dependencies within the pom.xml file created in step 2. This can then be used when compiling the code.

    <target name="initDependencies">
        <artifact:dependencies pathId="maven.dependency.classpath">
            <pom file="${project.dir}/pom.xml"/>
        </artifact:dependencies>
    </target>
  • Create a deploy target that refers to the pom.xml file from step 2 and the correct enterprise remote repository for deploying artefacts.

    <target name="mvnDeploy">
        <!-- Refer to the local POM file -->
        <artifact:pom id="projectPom" file="${project.dir}/pom.xml" />
        <!-- Define the remote repository -->
        <artifact:remoteRepository id="inHouseRepo" url="${maven.deploy.repository.url}">
            <releases enabled="true"/>
            <snapshots enabled="false"/>
            <authentication username="${maven.deploy.repository.username}"
                            password="${maven.deploy.repository.password}"/>
        </artifact:remoteRepository>
        <artifact:install-provider artifactId="wagon-webdav" version="1.0-beta-1"/>
        <!-- Deploy the artefact using the new POM file to the in-house repository -->
        <artifact:deploy file="${project.dist.dir}/${}.jar">
            <remoteRepository refid="inHouseRepo"/>
            <pom refid="projectPom"/>
        </artifact:deploy>
    </target>

These application modules will now be stored in your enterprise remote Maven repository, conveniently available for a Maven build.

4. Create a Maven Project

Depending on the architecture of your application you can either:

  • Create a new top-level module as a Maven project
  • Convert the current top-level module into a Maven project

The key points with this activity are:

a. If you are converting, ensure that you follow the standard directory structure required by Maven.

b. Ensure that your new pom.xml contains all of the dependencies for the application. For the 3rd party libraries this should be a matter of digging through the various modules, finding the currently used 3rd party JAR files and adding entries to the pom.xml.
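As an illustration, a 3rd party library entry might look like the following (the log4j coordinates are just an example – use whatever JARs your modules actually reference):

```xml
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.16</version>
</dependency>
```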

For application modules, create an entry for each module as follows:

    <dependency>
        <groupId></groupId>
        <artifactId>module-1</artifactId>
        <version>1.0</version>
    </dependency>

5. Test

  • Package the application via Maven (i.e. mvn clean package).
  • Inspect the resulting artefact.
  • Does it match the baseline artefact created successfully in step 1?
  • Does it run successfully?

6. Repeat

Now convert the next highest application module to a Maven project following the steps outlined above.
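Once a few modules have been converted, the top-level Maven project can aggregate them so the whole application builds with a single command. A minimal sketch of such a parent POM, with placeholder group and module names:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.app</groupId>
    <artifactId>app-parent</artifactId>
    <version>1.0</version>
    <packaging>pom</packaging>

    <!-- Each converted module is listed here; 'mvn clean package'
         from this directory then builds them all in dependency order -->
    <modules>
        <module>module-1</module>
        <module>module-2</module>
    </modules>
</project>
```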

Process Discovery: Whiteboards are cool but is there a tool?

Posted on

Running a successful process discovery workshop is a dark art; it’s not everyone’s cup of coffee. It requires an ability to grasp concepts quickly, patience and, most importantly, a strong will – to keep the participants on track. It’s not just a matter of standing in front of a whiteboard and drawing some pretty shapes with lines between them; you need to control the flow of information, separating the important stuff from the not-so-important stuff, and keep everyone actively engaged for often days at a time. One of the hardest things to control is the level of detail, and it takes a certain tact to delicately move the group on when the workshop descends into a discussion over whether Bill or Jane currently approves a particular activity, or whether the average cycle time for the approval process is one hour or one hour and five minutes.

In my experience, a whiteboard, non-permanent markers and a wad of sticky notes work just fine, but you need a room with lots of blank walls, a good swivel chair and a rubber neck. There are now stacks of software programs about that claim they can assist you in workshop world, but I’m yet to see one that does everything I want it to, albeit they are getting close.

So, in order of preference (perhaps controversially), what are the key things I believe a tool needs to support?

  1. Flexible data capture – quite simply, I want to be able to capture data at every level: organisation, process, activity and task. Preferably, I’d like to be able to configure the types of data I can capture and not be constrained by predefined fields.
  2. Process modelling – obviously you need modelling support, and I don’t think there is a tool out there that doesn’t allow you to draw pictures. However, just providing support for modelling is not enough. Ideally you want a tool that supports the standard notations in this space: Business Process Modeling Notation (BPMN) and Event-driven Process Chains (EPC). The usability of the tool is absolutely critical; the last thing you want during a workshop is to be messing around trying to join two activities together – participants will quickly switch off if more time is spent in the tool than facilitating the workshop.
  3. Documentation – by documentation I mean more than just a PDF with pictures. Ideally, the tool will be able to generate a PDF that outputs diagrams nested amongst all the data you have captured, and potentially the output from any simulation.
  4. Process landscaping – the level above processes. Ideally, you would be able to model the process landscape as a hierarchy showing where in the organisation processes reside and how processes are divided into sub-processes.
  5. Simulation – I appreciate I’m potentially stepping into process design with this one, but at the point we start mapping the processes in detail I’d like to be able to simulate the process based on what I have captured. This doesn’t need to be a bells-and-whistles simulation; I just want to be able to quickly assess where the possible bottlenecks are and start getting a feel for the costs and cycle times for each step.
  6. Whiteboarding – in my opinion it’s not a good idea to start modelling the process too early. Initially, you just want to be able to capture the key steps in a process and the key activities that reside within each of those steps. Sequence is also important, but you’re best to focus on the happy path only and park the exceptions for a later iteration. To support this approach you need a tool that allows you to capture steps and activities in an unstructured manner, as you would on a whiteboard.
  7. Web browser support – ideally the whole tool would be browser-based, but at a minimum I’d be happy if the tool just supported the ability to publish a project in HTML form so that it is viewable in a browser.

So what are your priorities, and what’s out there? I think it’s fair to say that no one tool does it all. So, if there isn’t one tool, then ideally I’d like a seamless transition from one tool to the next, with limited data loss and minimal effort.

Sometimes paper is just better

Posted on

Being part of a new environment and meeting new people gives us a different perspective on things. Let me tell you how I rediscovered paper (as well as handwriting).

I have been practising Scrum and Scrumbut for a few years now, and I have always had an online tool to help along the way. Guided by the “Go Paperless” slogan, I assumed that if it was online, it was simply better. In many cases I can safely say that online is better, but I recently rediscovered the joys of paper.

In my current project, where Scrum (and not Scrumbut) is practised, we started out by maintaining the product backlog, iterations, stories, tasks and burn-down charts in JIRA. The results were quite satisfying, especially since we have access to GreenHopper (Agile tab). But updating the task board during the stand-ups felt a bit unnatural and broke the flow somewhat.

In the next iteration, our excellent agile coach, Alex Bould, suggested we maintain a paper version of our stories, tasks, burn-down charts and so on. Being the open-minded and easy-going people we are, we gave it a go. The experience was surprisingly positive. The action of taking a card off the wall and putting it “in process” gave me an extra sense of ownership of that card. I was not only reading it, I was also touching it. Maybe using more senses had an effect similar to the enhanced understanding you get when reading something out loud. Furthermore, writing tasks on cards (making it legible took a few tries), adding up quarter-days and hand-drawing the burn-down chart engaged my team just a little bit more than the online version did. Finally, being able to see the task wall along with the burn-down chart whenever we are in the office gives us a good sense of progression, not to mention that it is easier and cheaper to set up than higher-technology versions.

An iteration later, we decided to go back to the online version to see how it compared with our new experience. Once again, the experience was good, but in the sprint retrospective the team unanimously voted to revert to the good old paper way, and we have not looked back since.

Disclaimer: It goes without saying that a few preconditions need to exist in order to adopt the paper way. Having access to a large wall is essential and having a co-located team really helps.

Environment Management in the Cloud – The whole kit and caboodle

Posted on

This is the first in our “living in the clouds” series of posts. Over the past months we have moved from a traditional infrastructure, hosted on our own physical servers, to a cloud-based infrastructure. This is our story…

In this post I’ll talk about what has been the most challenging (and possibly most rewarding) part: having cloud-based infrastructure for our own development environments, supporting development infrastructure, and the production environment for our own software. As we worked through the problems, we came across some great tools and techniques for managing our environments.

Recreating our entire infrastructure – on demand (and every night)

Making the decision to decommission our servers and go to the cloud gave us the opportunity to re-architect our infrastructure from scratch. What we wanted to do was ensure Software Configuration Management (SCM) was central to our solution. How could we be sure we had achieved this? By deleting and recreating our entire infrastructure from version control every day. So every night we delete all our Amazon EC2 instances. Our servers (currently 11 Amazon EC2 servers) are then created from the ground up. That is, from a base Amazon AMI, we bootstrap each client using Puppet (a fantastic data centre automation tool). Puppet takes over and sets up the servers based on the version-controlled “recipes” – definitions of the desired state of each piece of infrastructure. The picture below shows the basic steps in this process:

Nightly Provisioning Flow
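To make the “recipes” idea concrete, here is a minimal, hypothetical Puppet manifest in the style we use – the class name, package and file paths are illustrative, not our actual configuration:

```puppet
# Hypothetical recipe: the desired state of a web server node.
# Puppet converges each freshly bootstrapped EC2 instance to this
# state, so no configuration ever lives outside version control.
class webserver {
  package { 'apache2':
    ensure => installed,
  }

  service { 'apache2':
    ensure  => running,
    enable  => true,
    require => Package['apache2'],
  }

  # Config file served from the (version-controlled) Puppet module
  file { '/etc/apache2/apache2.conf':
    source => 'puppet:///modules/webserver/apache2.conf',
    notify => Service['apache2'],  # restart the service on change
  }
}
```

Because the manifest describes desired state rather than steps, rerunning it on a clean instance every night always produces the same server.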

Why bother? Here are 5 good reasons…

Getting to this stage has involved a fair amount of effort. We have had to learn new tools, develop new processes and techniques, and also write an application that could provision EC2 instances via the Amazon API. This is why we think it is well worth the hassle:

  1. No more debugging configuration issues – ever. “Manual” configuration is, in our experience, the cause of a huge amount of application downtime and debugging. Making changes outside of the version-controlled process has become an impossibility. We can still, however, test and roll out changes in a matter of minutes in a controlled and reversible manner.
  2. A side benefit, but a good one – we don’t pay for our EC2 instances overnight when we don’t need them.
  3. The marginal cost of expansion is never a new server – as opposed to managing our own physical servers. We never hit the situation where, running at near capacity, the cost of increasing capacity by a couple of percent is going out shopping for a new server.
  4. Lots of practice means less excitement – creating new environments is a process that is in effect practised every day. No more spending a week hand-crafting a new environment based on what you think production is like – probably. We can get a new environment built from bare metal in under 10 minutes.
  5. Change of focus – we no longer think in terms of servers or hardware, but services and components. This is a big shift in mindset that leads to flexible and innovative solutions to problems.