Judging Project Progress
Typically progress is judged by counting deliverables as part of an earned value calculation. Unfortunately, deliverables are only one indication of progress, and not a very reliable one. All too often, as a project nears completion, a green project judged 90% complete turns amber or red as integration testing or early-adopter deployments start to confront reality, and the project slides back to 70% complete.
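To make the problem concrete, here is a minimal sketch of a standard earned value calculation. The function names and the budget figure are my own illustrations, not from the text; the formulas are the conventional EVM ones.

```python
# Sketch of a basic earned value calculation (illustrative names and figures).

def earned_value(budget_at_completion: float, percent_complete: float) -> float:
    """Earned value: the budgeted cost of the work judged to be performed."""
    return budget_at_completion * percent_complete

def schedule_performance_index(ev: float, pv: float) -> float:
    """SPI > 1 means ahead of plan, < 1 behind plan."""
    return ev / pv

# A project judged 90% complete against a 1,000,000 budget...
ev_optimistic = earned_value(1_000_000, 0.90)  # 900000.0
# ...that slides back to 70% once integration testing confronts reality:
ev_realistic = earned_value(1_000_000, 0.70)   # 700000.0
```

The sketch shows the weakness the text describes: the calculation is only as good as the "percent complete" judgement fed into it.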
I’m going to drill down into two other techniques that I favour as a complement to earned value reporting for IT projects.
The first technique is rarely used but, in my view, universally relevant and very powerful: assess progress by asking key stakeholders to write down statements that they expect to be true at the milestone being assessed. The stakeholders then determine their confidence that those statements are true by interviewing the development team, reviewing documents and testing software. An example should make this clear.
Consider a team developing a new multi-tenant virtual machine infrastructure with automated provisioning, monitoring, usage tracking and billing. Different stakeholders will have different statements; after the milestone review the sales team might say:
- We understand the target market for the service with 100% confidence
- We understand how we will differentiate the service with 50% confidence
- We understand how the sales process will work with 100% confidence
- We understand the cost and pricing model with 30% confidence
The operations team might say:
- We understand how we will deploy the service with 100% confidence
- We understand how we will test the service with 79% confidence
- We understand how we will operate the service with 60% confidence
- We understand how we will on-board new customers onto the service with 50% confidence
A refinement of the above would be to make the statements as follows:
We understand how we will test the service with 79% confidence; we expected to be 90% confident at this point in the project.
I’ve shown the above in text but in reality it would be shown as a horizontal stacked bar chart. Note that each stakeholder should have supporting evidence for their views, which they would discuss with the development team. The development team then knows where they need to focus to build the confidence of their stakeholders.
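A minimal sketch of how the refined statements might be tracked, rendered here as a horizontal text bar rather than the stacked bar chart a real report would use. The testing statement and its figures (79% actual, 90% expected) come from the example above; the `Statement` class, the second statement's expected figure and all field names are my own assumptions.

```python
# Sketch: stakeholder confidence vs. expectation, rendered as a text bar.
from dataclasses import dataclass

@dataclass
class Statement:
    text: str
    confidence: int  # percent confidence assessed at this milestone
    expected: int    # percent confidence expected at this milestone

def render(statement: Statement, width: int = 20) -> str:
    """One bar per statement; the gap shows where confidence lags the plan."""
    filled = round(statement.confidence / 100 * width)
    bar = "#" * filled + "-" * (width - filled)
    gap = statement.confidence - statement.expected
    return (f"[{bar}] {statement.confidence:3d}% "
            f"(expected {statement.expected}%, gap {gap:+d}) {statement.text}")

ops_statements = [
    Statement("We understand how we will test the service", 79, 90),
    # The 70% expectation below is hypothetical, for illustration only:
    Statement("We understand how we will operate the service", 60, 70),
]
for s in ops_statements:
    print(render(s))
```

A negative gap flags exactly where the development team needs to focus to rebuild stakeholder confidence.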
This technique, like the second one described below, is a complement to traditional progress tracking, but in my view each provides a way to make it much more robust. There are some particular benefits to the stakeholder assessments:
- They allow progress to be visualised in business terms
- They allow stakeholders to clearly communicate upwards and to the development team
- At each milestone review progress can be tracked visually; sometimes confidence will reduce as development progresses and the stakeholders learn more
- Assessing confidence levels requires conversations between the reviewers and the development teams; these conversations are much more valuable than passing hundreds of review comments and responses back and forth
- Different stakeholders get an appreciation of what’s important to each other and of their respective view of progress
- Discussion between stakeholders is very likely. For example, one group might believe they have a good understanding of the cost of the service but another might disagree; exploring why might be very revealing
The second technique, which is gaining considerable support, is to structure a project into units of work that can be completed and tested independently of each other. Ideally these deliverables provide useful value in their own right; if they can be used by the project team or customers, all the better. An example will help. Consider the same hosting service described above.
- The first step would be to build just the virtual machine hosting layer
- The team would then start using the hosting layer to develop and test the provisioning service.
- Once the provisioning service was running they would use that to provision the test VMs for the monitoring service, which would monitor the provisioning service and the hosting service
- Once the monitoring service was running they would use that to test the reporting service
- Once the reporting service was running they would use that to test the billing service
The deliverables from each stage are used in creating the next stage; because they are being used, they are being tested. The hosting layer doesn't need to be complete before it's used to develop the provisioning service; it just needs enough capability to be usable by risk-tolerant developers. As new capabilities are added daily or weekly, they get tested through use.
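The staged build order above can be sketched as a dependency walk: each service is brought up using the services delivered before it, so earlier deliverables are exercised, and therefore tested, in the course of building later ones. The service names mirror the example; the `Service` class and `build_order` function are my own illustration.

```python
# Sketch: each stage depends on (and so exercises) the stages before it.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    depends_on: list = field(default_factory=list)

hosting      = Service("hosting")
provisioning = Service("provisioning", [hosting])
monitoring   = Service("monitoring",   [provisioning, hosting])
reporting    = Service("reporting",    [monitoring])
billing      = Service("billing",      [reporting])

def build_order(target: Service, done=None) -> list:
    """Depth-first walk: dependencies are built, used and tested before dependents."""
    done = done if done is not None else []
    for dep in target.depends_on:
        build_order(dep, done)
    if target.name not in done:
        done.append(target.name)
    return done

print(build_order(billing))
# each service appears once, in the order the stages describe
```

Walking the dependency graph reproduces the stage order in the text: hosting, then provisioning, then monitoring, reporting and billing.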
Using the process described above, integration of new capabilities happens continuously, and those capabilities are tested both formally (by independent testers) and informally through usage by the team developing the service. Because the service being developed is in continuous use, the developers have SOME real-world experience on which to base their assessment of progress. I stress SOME because this is no substitute for formal testing, which can address issues like stress testing better than day-to-day usage by a small development team can.