The flaw of sprint capacity allocation
I’ve been dealing with this problem for a long time, trying to feel comfortable, build stable systems, and be predictable. I thought I was moving forward with a Lean mindset (paying attention to flow, avoiding queues and waste), using empirical data, the number of user stories done, Monte Carlo simulation, #NoEstimates, and so on.
But today I’d like to talk about what happens when you calculate sprint capacity from availability. I thought people had stopped applying this kind of project-management approach inside Scrum, but I’ve realized it is alive and kicking.
How does this approach work? Easy peasy: use story points as “ideal development hours” and calculate capacity from people’s availability (in hours), minus 20% for time spent in Scrum “rituals” (daily, retrospective, review, planning, refinement). You might ask what the problem is. Well, these teams are barely finishing 50% of their sprint commitment, because, as H.L. Mencken said, “for every complex problem there is an answer that is clear, simple and wrong.”
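To make the approach concrete, here is a sketch of that naive calculation (all numbers are illustrative, not from any real team):

```python
# Naive availability-based capacity: treat story points as ideal dev hours.
# Illustrative numbers: 5 people, 8 hours/day, 10-day sprint.
team_hours = 5 * 8 * 10            # 400 hours of raw availability
ceremony_overhead = 0.20           # 20% reserved for Scrum events

# The "capacity" the team is asked to commit to, in points-as-hours.
capacity_points = team_hours * (1 - ceremony_overhead)

print(capacity_points)  # 320.0 "points" committed...
# ...and yet the team finishes roughly half of that, because the model
# ignores queues, interruptions, blocked stories, and partial allocation.
```

The arithmetic is correct; the model behind it is not, which is the point of the next four sections.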
I’m going to present my humble point of view based on four points:
1.- Capacity calculation: Calculating from availability is a bad starting point. Where are all the uncertainties related to people’s time? Other meetings? Time well spent helping a teammate, pair programming, and so on? And it can get even worse if you play resource Tetris (as they do), where people are treated as resources and are allocated 50% or 25% to this specific project. That adds more uncertainty and complexity, yet the system keeps being treated as if it were simple.
2.- Flow: Where is the time a user story spends waiting reflected? Or the effect of bottlenecks? Yes, I’m talking about queues: there is no queue availability calendar, and no capacity calculation for them! Also keep in mind that the more user stories you have (and the bigger they are), the higher the odds they will get blocked by internal or external issues (it seems we’ll also need blocking availabilities in our sprint capacity plan). Our bucket of time turns out to be smaller than we expected!
3.- Buffer availability: This is not a cross-functional team; there are developers and testers. So if developers “develop” during 100% of their availability, they are developing until the end of the sprint, which leaves no QA time for testing (and builds big queues, as I said in point 2). And if the QA folks are not helping developers from the very beginning of the sprint, their testing availability is fake until they start receiving user stories released by developers.
4.- System stability: One important point of Scrum is that, thanks to sprints, you have a moment with zero work in progress. At that moment you have a stable system where you can apply Little’s law and be more predictable. If you never finish user stories you end up with a complex system (hard to predict), and it could even get worse: a chaotic system (impossible to predict). Bad news for almost any stakeholder, isn’t it?
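As a quick sanity check of what Little’s law gives you in a stable system (numbers below are illustrative):

```python
# Little's law: average WIP = throughput * average lead time,
# which only holds when the system is stable.
# Illustrative numbers: 3 stories finished per week, 2-week average lead time.
throughput = 3.0        # stories per week
lead_time = 2.0         # weeks

avg_wip = throughput * lead_time   # expected stories in progress on average

print(avg_wip)  # 6.0
# If WIP keeps growing instead (unfinished stories carried over sprint
# after sprint), the system is not stable, this relationship no longer
# holds, and any prediction based on it becomes unreliable.
```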
I think using this kind of capacity calculation is a bad deal, because you will be far too optimistic and you will build an unpredictable system. From that point on, I think we should start using throughput, not just story points. Throughput can feed models like Monte Carlo simulation, so you learn how your system actually behaves and become data-driven when improving estimations and team capacity. To get there, I recommend splitting user stories so they are as equal as possible in complexity and effort, and making them as small as you can, to build a more flexible system.
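A minimal sketch of such a throughput-based Monte Carlo forecast, assuming you have a list of historical per-sprint story counts (the `history` data and the percentile choice below are made up for illustration):

```python
import random

# Historical throughput: stories finished per sprint (illustrative data).
history = [4, 6, 5, 7, 3, 6, 5, 8]

def forecast_sprints(backlog_size, history, runs=10_000, seed=42):
    """Monte Carlo: how many sprints to finish `backlog_size` stories?

    Each run replays the backlog by sampling past sprint throughputs
    at random, then we take a conservative percentile of the results.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        done, sprints = 0, 0
        while done < backlog_size:
            done += rng.choice(history)  # sample a past sprint's throughput
            sprints += 1
        results.append(sprints)
    results.sort()
    # 85th percentile: "we finish within N sprints in 85% of simulations".
    return results[int(0.85 * runs)]

print(forecast_sprints(40, history))
```

The forecast is only as good as the stability of the system behind the data, which is exactly why points 1–4 matter: an unstable system makes even good historical throughput misleading.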
Another point of view: if you are not finishing your sprint commitment, you should improve your throughput, and to do that you have to reduce lead time (assuming work in progress stays constant). Fixing some bottlenecks or building a cross-functional team could also help with that. For the moment, I would avoid adding more people to the team, because that increases capacity but also work in progress, so you end up back at the same point.
I borrowed the “flaw of…” wording from the awesome book by Sam L. Savage, “The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty” :)