What Are DORA Metrics, and Why Should You Care?
by Gary Worthington, More Than Monkeys
If you’ve worked in software delivery for any length of time, you’ve probably been dragged into a meeting about performance metrics. Someone wants to understand which teams are “performing best,” or why delivery feels slower than it did last quarter, or whether DevOps is actually making a difference.
And eventually, someone mentions DORA.

DORA metrics are everywhere now. They’re in every platform, every dashboard, every transformation initiative with a slide deck and a budget. But the way they’re introduced and interpreted often misses the point. Teams either misunderstand them, misuse them, or plot the numbers without ever asking the right questions.
So what are DORA metrics actually for? What do they measure, and how do we use them in a way that’s useful, without turning them into another blunt management tool?
Where Did DORA Metrics Come From?
DORA stands for DevOps Research and Assessment. It was born out of several years of research into what makes software teams high-performing, led by Nicole Forsgren, Jez Humble, and Gene Kim. If you’ve read Accelerate, this is where it all comes from.
They studied thousands of engineering teams and found that four key metrics consistently correlated with better delivery performance - not just in terms of output, but in terms of business success, team health, and stability.
These became known as the DORA metrics.
The Four DORA Metrics
Here’s what they are, what they measure, and what they actually tell you:
1. Deployment Frequency
How often does your team deploy code to production or release it to users?
- Why it matters: Teams that deploy frequently are more iterative, more adaptable, and better at responding to feedback.
- What to watch for: A team deploying once a month is likely working in large, risky batches. A team deploying daily is keeping scope tight and feedback loops short.
2. Lead Time for Changes
How long does it take for a code change to go from commit to production?
- Why it matters: Long lead times signal friction in the process. Think manual testing, slow reviews, poor CI/CD, or unclear ownership.
- What to watch for: Faster lead time means faster learning. The best teams make small changes and get them live quickly.
3. Change Failure Rate
What percentage of your deployments cause failures in production (bugs, rollbacks, hotfixes, incidents)?
- Why it matters: Shipping fast only works if it’s safe. If deployments regularly cause chaos, you’ve got a quality problem.
- What to watch for: High failure rates suggest missing test coverage, rushed releases, or poor observability.
4. Mean Time to Restore (MTTR)
When something goes wrong in production, how long does it take to recover?
- Why it matters: Incidents happen to the best of us. What matters is how quickly you detect and resolve them.
- What to watch for: High MTTR often points to brittle systems, unclear ownership, or weak incident response.
What These Metrics Actually Show
Together, these metrics give you a picture of how healthy your delivery pipeline is.
- Deployment Frequency and Lead Time for Changes show your tempo. How quickly can the team respond to new information?
- Change Failure Rate and MTTR show your resilience. How well can the team recover from problems?
That combination is what makes them powerful. You’re not just looking at speed, but at speed with safety. You can’t optimise one without considering the other.
How to Measure DORA Metrics in Practice
Most modern platforms (GitHub, GitLab, CircleCI, Azure DevOps, and so on) can help you extract these metrics, but you’ll still need to define what counts in your context.
Here’s how to approach it:
Deployment Frequency
Track successful production deployments. If you have multiple environments, be clear what “live” means. Don’t just count every CI build.
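As a minimal sketch, assuming you can export one timestamp per successful production deployment (the dates here are invented for illustration), counting deploys per week might look like this:

```python
from collections import Counter
from datetime import date

# Hypothetical export: one ISO date per successful production deployment.
deployments = [
    "2024-05-01", "2024-05-01", "2024-05-03",
    "2024-05-08", "2024-05-09", "2024-05-15",
]

# Group by ISO year and week so "deploys per week" falls out directly.
per_week = Counter(date.fromisoformat(d).isocalendar()[:2] for d in deployments)

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} deployment(s)")
```

The important decision isn’t in the code; it’s which events make it into that list in the first place.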
Lead Time for Changes
Measure from the first commit (or PR open) to the time that code is deployed to production. This can be noisy if you don’t squash commits or if WIP sits idle, so you may want to track PR cycle time as a proxy.
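A toy sketch of the calculation, assuming you can pair each change’s first commit time with its production deploy time (both timestamps hypothetical):

```python
from datetime import datetime
from statistics import median

# Hypothetical pairs: (first commit time, production deploy time) per change.
changes = [
    ("2024-05-01T09:00", "2024-05-01T16:30"),
    ("2024-05-02T10:15", "2024-05-03T11:00"),
    ("2024-05-06T14:00", "2024-05-06T15:45"),
]

lead_times_hours = [
    (datetime.fromisoformat(done) - datetime.fromisoformat(start)).total_seconds() / 3600
    for start, done in changes
]

# Median is more robust than the mean when one change sits idle for days.
print(f"Median lead time: {median(lead_times_hours):.1f} hours")
```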
Change Failure Rate
This one is tricky. It relies on tagging incidents or rollbacks to specific deployments. If you’re not doing that already, start simple: log each time a deployment causes a production issue, and track the ratio.
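The arithmetic itself is trivial once you have the tagging in place. A sketch, assuming a manually maintained log where each deployment is marked as having caused an issue or not:

```python
# Hypothetical deployment log: True means the deploy caused a production issue.
deploy_outcomes = [False, False, True, False, False, False, True, False]

failures = sum(deploy_outcomes)
change_failure_rate = failures / len(deploy_outcomes)

print(f"Change failure rate: {change_failure_rate:.0%}")  # 25% for this sample
```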
Mean Time to Restore
Track the time from incident start to full resolution. You’ll need an incident management tool (or a shared doc at the very least). Don’t try to automate this too early; start with manual tagging and retrospective reviews.
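A minimal sketch, assuming a hand-tagged incident log of (detected, fully resolved) timestamps:

```python
from datetime import datetime

# Hypothetical incident log: (detected, fully resolved), manually tagged.
incidents = [
    ("2024-05-02T09:10", "2024-05-02T10:40"),
    ("2024-05-09T22:05", "2024-05-10T01:35"),
]

durations_hours = [
    (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600
    for start, end in incidents
]

# MTTR is the plain mean of restore times across incidents.
mttr = sum(durations_hours) / len(durations_hours)
print(f"MTTR: {mttr:.1f} hours")
```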
Using DORA Metrics Alongside Other Agile Metrics
DORA gives you a view of delivery health, but to get a rounded picture of team performance, you’ll want to pair it with other metrics that track flow, predictability, and team dynamics.
Here are some examples that complement DORA well:
1. Cycle Time and Flow Efficiency
- Cycle Time shows how long it takes for a ticket to go from “in progress” to “done.”
- Flow Efficiency breaks that down into time spent in active work vs time spent idle.
These help you understand internal bottlenecks, handover delays, or slow feedback loops.
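Flow efficiency is usually defined as active time divided by total cycle time. A toy sketch, assuming you can split each ticket’s elapsed time into active and idle portions (the numbers are invented):

```python
# Hypothetical ticket history: hours actively worked vs hours waiting.
tickets = [
    {"active_hours": 6, "idle_hours": 18},
    {"active_hours": 4, "idle_hours": 4},
    {"active_hours": 10, "idle_hours": 30},
]

for t in tickets:
    cycle_time = t["active_hours"] + t["idle_hours"]
    flow_efficiency = t["active_hours"] / cycle_time
    print(f"cycle time: {cycle_time}h, flow efficiency: {flow_efficiency:.0%}")
```

Low flow efficiency with a reasonable cycle time usually means work is waiting on people, not that people are slow.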
2. Throughput and Work Item Age
- Throughput tracks how many tickets you complete per week.
- Work Item Age tells you how long current tickets have been open.
These metrics help forecast future delivery and identify aging work that’s stuck or deprioritised.
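Both are simple to derive from a board snapshot. A sketch under that assumption, with all dates and counts hypothetical:

```python
from datetime import date

today = date(2024, 5, 20)

# Hypothetical board snapshot: start date of each ticket still in progress.
in_progress = [date(2024, 5, 2), date(2024, 5, 13), date(2024, 4, 22)]

# Work item age: how long each open ticket has been in flight.
ages = sorted((today - start).days for start in in_progress)
print(f"Work item ages (days): {ages}")  # old outliers are stuck or deprioritised

# Throughput: completed tickets per week, from a hypothetical count.
completed_last_four_weeks = [5, 7, 4, 6]
print(f"Average throughput: {sum(completed_last_four_weeks) / 4:.1f} tickets/week")
```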
3. WIP Limits and Context Switching
Track how many stories are in progress at once. Too much WIP means too much multitasking, which increases lead times and reduces focus. This is where you start to expose delivery friction caused by unprioritised work or excessive scope juggling.
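As a tiny illustration, assuming you can list the assignee of each in-progress story and have agreed a per-person limit (both the names and the limit here are made up):

```python
from collections import Counter

# Hypothetical board snapshot: assignee for each in-progress story.
in_progress = ["alice", "bob", "alice", "alice", "carol", "bob"]

WIP_LIMIT_PER_PERSON = 2  # assumed team policy, not a universal rule

for person, count in Counter(in_progress).items():
    if count > WIP_LIMIT_PER_PERSON:
        print(f"{person} has {count} items in progress: likely context switching")
```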
4. Bug Rate and Escaped Defects
- How many bugs are being raised by customers or QA post-release?
- Are we fixing root causes, or just firefighting?
These connect the DORA quality signals with actual user experience, and help tie delivery speed back to business confidence.
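A common way to express this is an escaped-defect rate: the share of all known bugs that reached users. A sketch with invented counts:

```python
# Hypothetical monthly counts: bugs caught before release vs raised by users after.
caught_pre_release = 18
escaped_to_production = 6

escape_rate = escaped_to_production / (caught_pre_release + escaped_to_production)
print(f"Escaped defect rate: {escape_rate:.0%}")  # 25% for this sample
```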
5. Team Sentiment and Qualitative Signals
Surveys, retrospectives, and check-ins are just as important.
Use these to get context behind the numbers. Metrics can tell you what happened, but only people can tell you why.
When you bring all of these together, you stop chasing individual numbers and start seeing the full system: how work flows through it, where it gets stuck, and how the team feels doing it.
Common Pitfalls to Avoid
Like any metric, DORA can be misused. Here are some patterns to watch out for:
Turning Metrics into Targets
The second you say, “Teams must deploy daily,” or “MTTR must be under 1 hour,” you incentivise the wrong behaviours. Teams will optimise the number, not the outcome.
Measuring Without Acting
Plotting these numbers in a dashboard is not the work. The real value comes from asking why the numbers look the way they do and doing something about it.
Ignoring Context
Different teams have different constraints. A mobile team waiting on App Store approvals will naturally have lower deployment frequency. A team owning legacy systems may have longer lead times. The metrics should inform your thinking, not standardise everyone to the same pace.
Using DORA to Drive Better Conversations
The biggest value in DORA metrics is this: they help you ask better questions.
- Why is lead time going up? Is it reviews? Build failures? Waiting on design?
- Why is deployment frequency falling? Are we bundling too much work together?
- Why is MTTR so high? Do we have enough observability? Are incidents being documented properly?
Use them in retrospectives. Use them in squad health checks. Use them to support conversations about investment, tech debt, and continuous improvement.
What Good Looks Like
You don’t need perfect numbers to be a high-performing team. In fact, obsessing over benchmarks will often slow you down. Focus on direction, not perfection.
A healthy team will:
- Deploy small, valuable changes regularly
- Recover quickly when something breaks
- Learn from incidents instead of hiding them
- Use metrics to improve, not to defend themselves
That’s what DORA is really about. Not a performance scoreboard, but a set of feedback loops. A way to see delivery as a system, not a factory.
In Summary
DORA metrics aren’t just another agile buzzword. They’re a research-backed, practical way to understand how well your teams are delivering and where they need support.
If you use them thoughtfully, they can help you shift the conversation away from story points and velocity, and towards flow, resilience, and continuous improvement.
Just remember:
- Don’t weaponise the metrics
- Don’t chase benchmarks
- Focus on learning, not judging
Good teams talk about delivery. Great teams talk about how to improve it. And that’s where DORA really earns its place.
Gary Worthington is a software engineer, delivery consultant, and agile coach who helps teams move fast, learn faster, and scale when it matters. He writes about modern engineering, product thinking, and helping teams ship things that matter.
Through his consultancy, More Than Monkeys, Gary helps startups and scaleups improve how they build software — from tech strategy and agile delivery to product validation and team development.
Visit morethanmonkeys.co.uk to learn how we can help you build better, faster.
Follow Gary on LinkedIn for practical insights into engineering leadership, agile delivery, and team performance.