Agile Myths: Predictability Lost
There seems to be a fairly common view that an Agile approach to software development is less predictable than a traditional (waterfall) method. Steve McConnell's comments in his post on the business impact and business benefits of Agile are typical of this sentiment:
"True agility - which means adopting a posture that allows you to respond rapidly to changing market conditions and customer demands - conflicts with predictability. Some businesses value agility, but many businesses value predictability more than they value the ability to change direction quickly."
In practice I would question how much predictability traditional methods offer. For example, PRINCE2 requires an early up-front estimate to be produced for how much effort it will take to complete the project. The estimate is required both to ensure the business case is valid (and hence allow the project to proceed) and for planning the work. However, PRINCE2 provides no specific guidance on how to actually estimate the work.
And unfortunately, in software development, it is not possible to estimate the effort up-front with great accuracy. In Software Estimation: Demystifying the Black Art, Steve McConnell explains the concept of the estimation Cone of Uncertainty. This shows that, even once detailed technical requirements have been fixed, the best possible estimate will only have an accuracy of plus or minus 25 percent. That means a project estimated to take 2 years and cost £3m could actually take anywhere between 1.5 and 2.5 years and cost between £2.25m and £3.75m (note: the accuracy for elapsed time and cost would actually be even worse, because deriving these from effort estimates introduces further variability). And Steve McConnell is quite clear that you cannot estimate more accurately than this:
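The ±25% arithmetic above can be sketched in a few lines. This is just a worked illustration of the example figures (2 years, £3m), not anything prescribed by McConnell's book:

```python
def estimate_range(point_estimate, accuracy=0.25):
    """Return (low, high) bounds for a point estimate at a given
    best-case accuracy, e.g. 0.25 for plus or minus 25 percent."""
    return (point_estimate * (1 - accuracy), point_estimate * (1 + accuracy))

# The article's example project: 2 years of elapsed time, £3m of cost.
duration_low, duration_high = estimate_range(2.0)  # years
cost_low, cost_high = estimate_range(3.0)          # £m

print(f"Duration: {duration_low} to {duration_high} years")  # 1.5 to 2.5 years
print(f"Cost: £{cost_low}m to £{cost_high}m")                # £2.25m to £3.75m
```

Note that even this best-case band spans a full year of schedule and £1.5m of budget.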
"An important and difficult concept is that the Cone of Uncertainty represents the best-case accuracy that it is possible to have in software estimates at different points in a project. The Cone represents the error in estimates created by skilled estimators. It's easily possible to do worse. It isn't possible to be more accurate; it's only possible to be more lucky."
And the experience of my team certainly supports this. Small changes take an average of 30 days of elapsed time to go from being selected from our backlog to being released into production, but the variation is large: from 7 to 84 days.
Not only is it impossible to accurately predict the cost and duration for a traditional project, it is also very hard to accurately measure progress. As Jolt Award winning author Johanna Rothman once told me:
"A serial lifecycle (waterfall or phase-gate) is the hardest lifecycle to try to predict anything about. With a serial lifecycle you don’t have any real knowledge about the system until integration and test. You have no meaningful data until integration and test. All the data you have is surrogate data. So you have the least amount of predictability until the end of the project."
In other words, you might, on a traditional project, think that you are halfway through coding. But since you have no idea yet of the quality of the code, whether it works, or whether it meets any of the initial requirements, using this as a measure of progress is risky.
Now compare this with Agile. Teams using Scrum generally estimate the size of the software using story points or ideal days. They then use their velocity, which is how many story points they can complete in a given time, to estimate completion dates. How does the team derive their velocity? Usually by measuring it, as described by Henrik Kniberg in Scrum and XP from the Trenches:
"One very simple way to estimate velocity is to look at the team’s history. What was their velocity during the past few sprints? Then assume that the velocity will be roughly the same next sprint. This technique is known as yesterday’s weather."
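The "yesterday's weather" technique can be sketched as a simple forecast: average the recent sprint velocities and divide the remaining work by that rate. The numbers here are hypothetical, purely for illustration:

```python
import math

def forecast_sprints(remaining_points, recent_velocities):
    """Forecast how many sprints remain, assuming velocity will be
    roughly the average of the last few sprints ("yesterday's weather")."""
    velocity = sum(recent_velocities) / len(recent_velocities)
    # Round up: a partially filled sprint still occupies a whole sprint.
    return math.ceil(remaining_points / velocity)

# Hypothetical team: 120 story points left on the backlog, and the
# last three sprints completed 18, 22 and 20 points respectively.
print(forecast_sprints(120, [18, 22, 20]))  # 6 sprints remaining
```

With 2 week sprints, that forecast translates directly into a calendar date, and it is re-based on fresh data every sprint.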
With 2 week sprints it doesn't take long to build up a good picture of a team's velocity. Using 2 week sprints also gives you a far more accurate measure of progress because, rather than using surrogate measures, you are measuring how much working software has been delivered into the hands of real users.
Kanban takes a slightly different approach by measuring, for each task, how long it takes (in elapsed time) to go from being selected from the backlog through to being completed (where completed should ideally equate to "being in the hands of users"). In a relatively short period of time you build up a picture of how long it typically takes for a task to be finished. For example, I know that my team delivers 80% of small changes within 50 days. That's probably a high enough level of certainty to make commitments and plan but, if I need more, I also know that 90% of small changes are delivered within 68 days. By combining this data with throughput measurements it is possible to predict, with confidence, how long it will take to complete a number of work items. As with Scrum, progress measurement is accurate because the measure being used is "working software in the hands of users".
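Deriving those percentile commitments from historical lead times is straightforward. A minimal sketch, using hypothetical lead-time data rather than my team's real figures:

```python
import math

def percentile_lead_time(lead_times, pct):
    """Return the smallest lead time (in days) within which pct percent
    of historical work items were completed."""
    ordered = sorted(lead_times)
    # Index of the item at or above the requested percentile.
    index = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[index]

# Hypothetical lead times (days) for ten completed small changes.
lead_times = [7, 12, 20, 25, 30, 33, 41, 48, 60, 84]

print(percentile_lead_time(lead_times, 80))  # 48 days
print(percentile_lead_time(lead_times, 90))  # 60 days
```

The same data set answers both the planning question ("what can we commit to with 80% confidence?") and the higher-assurance one, without any up-front estimation at all.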
So, when comparing traditional methods with Agile, I don't think it is true that you exchange predictability for agility. Instead I think that you exchange an imagined sense of high predictability for a real (albeit slightly lower) level of known predictability. Furthermore, by measuring progress more accurately (with meaningful rather than surrogate data), you are better able to predict throughout the project whether or not you will be able to hit a deadline. And because Agile methods lower the cost of change you can make choices (even late on in the project) about whether to deliver the original scope and accept a schedule slip, or reduce the scope but achieve the target completion date.