When a sprint misses, the maths looks linear. You planned 13 story points. You delivered 11. You're 2 points short. You'll ship the rest next sprint. Problem solved.
This is the accounting view. It's wrong. The economic view is different — and it costs you money in ways that never appear in the sprint report.
The cost of delay isn't linear, it's exponential
Don Reinertsen wrote something that should be tattooed on every product leader's forehead: "If you only quantify one thing, quantify the cost of delay." Not effort. Not risk. Cost of delay.
Here's why a missed sprint is worse than it looks: When you planned that feature, you planned it because something was true at that moment. A customer had a problem. A market window was open. A competitor was absent. Those conditions don't wait.
If you ship six weeks late instead of two weeks, you don't lose four weeks of value. You lose the value from the entire operating condition you were working in. The customer solved the problem differently. The market moved. The competitor shipped their version first. The business case dissolved.
That's cost of delay. It compounds the longer you wait. And most organisations don't calculate it at all.
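A minimal sketch of the gap between the two views, assuming a hypothetical feature worth a fixed amount per week and an invented weekly erosion rate (every figure here is illustrative, not a benchmark):

```python
# Illustrative model of cost of delay. All figures are hypothetical.

def linear_cost(weekly_value: float, weeks_late: int) -> float:
    """Accounting view: each late week forgoes one week of value."""
    return weekly_value * weeks_late

def compounding_cost(weekly_value: float, weeks_late: int,
                     erosion_rate: float = 0.15) -> float:
    """Economic view: the forgone value grows each week, because the
    operating conditions (customer need, market window, absent
    competitor) erode while you wait."""
    return sum(weekly_value * (1 + erosion_rate) ** week
               for week in range(weeks_late))

# Six weeks late, for a feature assumed to be worth 10k/week:
print(linear_cost(10_000, 6))              # the sprint-report number
print(round(compounding_cost(10_000, 6)))  # the economic number
```

The erosion rate is the part nobody knows precisely; the point is that any non-zero rate makes the economic number pull away from the accounting one.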
Why sprints slip — and why you're probably not measuring the right thing
A sprint misses for a reason. Sometimes the team underestimated (they did). Sometimes there was unplanned work (there was). Sometimes external dependencies blocked progress. Rework. Bugs in production. Somebody left.
The problem is that once you understand the reason, it's too late. The slip happened. The cost of delay accrued. You can't get it back.
The leaders who move faster don't miss fewer sprints because they estimate better. They miss fewer sprints because they measure predictability aggressively and intervene early. They answer this question every week: "Are we on track? If not, what's blocking us, and when can we remove that blocker?"
Most organisations answer this question on Friday. By then, the week is gone.
The cascading cost: how one missed sprint breaks the forecast
You miss a sprint. That feature ships the next cycle instead. Suddenly the backlog shuffles and priorities get reordered. The next feature depends on the first one, but the first one is delayed, so the second slips too.
That's the direct cost. But there's an indirect cost too: your forecast becomes fiction. You told the board shipping would happen in Q2. Now you're telling them Q3. But you're not confident about Q3 because you've already missed once and the pattern is starting to feel familiar.
Engineers see this. They see promises get broken. They see the backlog shift. They see the calendar lie. Good teams respond by shipping smaller batches more frequently, cutting scope to make the date. Mediocre teams respond by burning out. Bad teams respond by becoming cynical and no longer believing anything leadership says about deadlines.
Quantifying the cost matters, even when it's uncomfortable
Most organisations can't answer this question: "What's the business impact of shipping this feature one month late?" They have no framework for it. They assume it's bad and move on.
"If you only quantify one thing, quantify the cost of delay. Every other metric is a proxy for what actually matters: speed to value. Not effort. Not velocity. Speed to value."
Don Reinertsen · The Principles of Product Development Flow
The organisations that are winning do this calculation. They understand that the cost of delaying a feature that would capture 5% market share outweighs the cost of crashing a sprint to ship it. They understand that shipping a reliability fix a week early saves them 200 customer support tickets.
The calculation doesn't have to be perfect. It just has to be real. And the moment you start calculating cost of delay, your priorities shift. Some features suddenly look cheap to delay. Others look expensive. And you start protecting the ones where delay costs the most.
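One concrete way to make the calculation real is to divide each item's cost of delay by its duration, a weighted-shortest-job-first rule sometimes called CD3. The items and figures below are invented for illustration:

```python
# Invented figures: cost of delay per week and duration for three items.
features = [
    {"name": "market-share feature", "cod_per_week": 50_000, "weeks": 6},
    {"name": "reliability fix",      "cod_per_week": 8_000,  "weeks": 1},
    {"name": "cosmetic refresh",     "cod_per_week": 500,    "weeks": 2},
]

# CD3: cost of delay divided by duration. Higher means "do this first",
# because delaying it burns the most value per week of capacity used.
for f in features:
    f["cd3"] = f["cod_per_week"] / f["weeks"]

ranked = sorted(features, key=lambda f: f["cd3"], reverse=True)
for f in ranked:
    print(f["name"], round(f["cd3"]))
```

Rough numbers are enough: the ranking barely changes if the estimates are off by half, which is why the calculation only has to be real, not perfect.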
What to actually do: Predictability beats heroism
High-performing teams miss fewer sprints because they've learned to do three things differently:
First, they measure predictability relentlessly. Not velocity. Predictability. "We said we'd do 40 points and we did 39" is a win. "We said we'd do 40 and we did 27" is a prompt for diagnosis. By the sixth sprint, teams that measure predictability look different — they build buffers, they descope aggressively, they communicate early.
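The measure itself is simple: delivered points over committed points, tracked every sprint. A minimal sketch with invented history and a hypothetical flagging threshold:

```python
def predictability(committed: int, delivered: int) -> float:
    """Fraction of the sprint commitment actually delivered."""
    return delivered / committed

# Invented rolling history of (committed, delivered) story points.
history = [(40, 39), (42, 41), (40, 27), (38, 37)]

# Flag any sprint that delivered under 85% of its commitment.
# The 85% threshold is an assumed policy, not a standard.
flagged = [(c, d) for c, d in history if predictability(c, d) < 0.85]
print(flagged)
```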
Second, they eliminate unplanned work through triage. Not all bugs are equal. A production bug that's affecting users gets fixed. A bug in a low-traffic feature gets backlogged. A cosmetic bug gets logged for later. When you have the discipline to triage, unplanned work doesn't blow the sprint.
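Those triage rules fit in a tiny routing function. The three categories and outcomes are the ones from the text; the function itself is an illustrative sketch, not a prescribed process:

```python
def triage(in_production: bool, high_traffic: bool, cosmetic: bool) -> str:
    """Route a bug to one of three outcomes so unplanned work
    can't blow the sprint by default."""
    if cosmetic:
        return "log for later"
    if in_production and high_traffic:
        return "fix now"    # the only case allowed into the sprint
    return "backlog"        # planned work stays protected
```

The value isn't in the code, it's in the default: everything except a user-facing production bug stays out of the sprint unless someone consciously decides otherwise.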
Third, they intervene early. By Wednesday morning, they know if the sprint is at risk. They don't wait until Friday retro to find out they missed. Early intervention means small changes. A conversation with the product owner. A quick descope. Not a full sprint failure and a rescheduled roadmap.
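A Wednesday-morning risk check can be as simple as comparing the work remaining against the team's historical pace. A hedged sketch, with the pace figure assumed rather than measured here:

```python
def sprint_at_risk(points_remaining: float, days_remaining: float,
                   historical_points_per_day: float) -> bool:
    """True when remaining work exceeds what the team has historically
    finished in the days left: time to descope or unblock, not wait."""
    return points_remaining > days_remaining * historical_points_per_day

# Mid-sprint check: 20 points left, 5 working days, assumed pace of 3/day.
print(sprint_at_risk(20, 5, 3))
```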
The cost of delay doesn't disappear. But when you ship on time, even with less scope, you capture the value. When you miss, you lose it. That simple trade-off should reshape how you manage your sprints entirely.