Scrum Metrics: What Engineering Leaders Should Track

Megan Harper
February 24, 2026
10 mins

Most engineering teams run Scrum. Far fewer can tell you whether it's working.

Sprint reviews happen. Retrospectives get scheduled. Velocity goes on a dashboard. But when you ask "are we getting faster?" or "where's the bottleneck?", the answers are usually vague, or worse, confidently wrong.

The problem isn't that scrum teams don't measure anything. It's that they measure the wrong things, or they measure the right things and use them in ways that break them.

According to the Scrum Alliance State of Agile report, only 52% of scrum teams consistently meet their sprint goals. That's not a scrum problem. It's a metrics problem: teams that don't know which signals to watch can't course-correct before a sprint goes sideways.

This guide covers the eight scrum metrics that actually tell you something: what they measure, what benchmarks exist, and how to use them without turning your engineers into number-gamers.

What Are Scrum Metrics?

Scrum metrics are quantitative measures used to evaluate the delivery performance, predictability, and team health of a scrum team. They fall into two categories: delivery metrics, which measure output and pace, and health metrics, which measure whether the team can sustain that pace over time.

The distinction matters. A team that ships every sprint but is burning out is not performing well. A team with perfect morale that misses every sprint goal isn't either. Both categories are required to get an accurate picture.

Scrum metrics operate at the team and sprint level. They're distinct from DORA metrics, which measure software delivery performance at the organizational and pipeline level. The two sets are complementary, not competing, and the relationship between them often surfaces the most actionable improvement opportunities. More on that below.

The 8 Scrum Metrics Every Engineering Leader Should Track

1. Velocity

What it measures: The average amount of work a team completes per sprint, expressed in story points or task count.

Velocity is the most commonly tracked scrum metric, and the most commonly misread. It's a planning tool, not a performance benchmark. A team with velocity 40 is not twice as productive as a team with velocity 20. Story points are relative to each team's own estimation system.

How to use it: Track rolling average velocity over 6-10 sprints. Use it to forecast how much can realistically fit in a sprint and how long a roadmap will take to deliver. Don't use it to compare teams or set improvement targets.

The red flag: Velocity that only trends upward. Story points inflate when teams feel pressure to show improvement. If velocity rises while sprint goal hit rate stays flat or quality drops, the numbers are being gamed, not the performance.
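The rolling average described above is simple to compute from sprint history. A minimal sketch in Python, using hypothetical sprint data and an illustrative 8-sprint window (within the 6-10 sprint range suggested above):

```python
def rolling_velocity(points_per_sprint, window=8):
    """Average story points completed over the last `window` sprints."""
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

# Hypothetical sprint history: story points completed per sprint
history = [21, 25, 18, 24, 22, 27, 23, 26, 24, 25]
print(rolling_velocity(history))  # 23.625 -> plan the next sprint around ~24 points
```

Note that the output is a forecast input, not a target: if next sprint's commitment is far above the rolling average, the plan is probably optimistic.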

2. Sprint Goal Hit Rate

What it measures: The percentage of sprints where the team meets the sprint goal as defined at the start of the sprint.

Sprint goal hit rate is the single best indicator of team predictability. A low rate tells you something is breaking in planning, scope management, or execution. A rate that's consistently 100% over many months may indicate the team is sandbagging commitments to look good.

As noted above, only about half of scrum teams consistently meet their sprint goals. For most organizations, that gap between commitment and delivery is a direct business planning problem: it cascades into missed roadmap dates, frustrated stakeholders, and eroded trust in the engineering team.

Benchmark: 70-85% is a healthy range for most teams.
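As a formula, hit rate is just sprints-where-the-goal-was-met divided by sprints run. A quick sketch with hypothetical outcomes:

```python
def sprint_goal_hit_rate(outcomes):
    """Percentage of sprints where the sprint goal was met.
    `outcomes` is one boolean per sprint: True = goal met."""
    if not outcomes:
        return 0.0
    return 100 * sum(outcomes) / len(outcomes)

# Hypothetical last 12 sprints
outcomes = [True, True, False, True, True, True,
            False, True, True, True, True, False]
print(sprint_goal_hit_rate(outcomes))  # 75.0 -> inside the healthy 70-85% band
```

Measure over a trailing window (e.g. the last 10-12 sprints) rather than all-time, so the number reflects how the team plans today.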

3. Sprint Burndown

What it measures: How work progresses through the sprint, specifically whether the team is on track to complete committed work by sprint end.

A healthy burndown slopes steadily from top-left to bottom-right. A flat line mid-sprint means work is blocked. A sudden vertical drop at the end of the sprint means work wasn't tracked properly throughout: items were marked complete in bulk right before the review.

How to use it: Review mid-sprint, not only at retrospective. A burndown that's consistently flat for the first half of every sprint points to slow handoffs at sprint start, unclear acceptance criteria, or blocked dependencies that aren't surfacing in standups.

4. Cycle Time

What it measures: How long it takes a work item to move from "in progress" to "done": the time from when development begins to when it ships.

Cycle time is where scrum metrics and DORA metrics intersect. Teams often discover that actual development time is a fraction of total cycle time, with the majority of time spent in review queues, QA handoffs, or waiting for deploy windows. That's where the improvement opportunities live.

Per the DORA State of DevOps Report, elite teams keep lead time for changes (commit to running in production) under one day, and high performers complete within a week. Most teams without visibility into their cycle time have no idea where they land.

How to use it: Track per work item type (features vs. bugs vs. tech debt), since cycle times often differ significantly by category, and the breakdown tells you more than the average.

5. Lead Time

What it measures: The total time from when a work item is requested (or added to the backlog) to when it's delivered, including all waiting time before work begins.

Lead time is the metric your product managers and stakeholders experience. "How long will this take?" is a lead time question, not a cycle time question. If lead time is consistently much longer than cycle time, the bottleneck is the queue: too many items waiting to start, not slow execution once work begins.

How to use it: Track alongside cycle time. A large gap between the two points to a backlog management or prioritization problem, not an execution problem.
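The gap is easy to compute if you have three timestamps per item. A sketch with hypothetical dates (the field names are illustrative; map them to whatever your tracker exports):

```python
from datetime import datetime

def hours_between(start, end):
    """Elapsed hours between two ISO-format timestamps."""
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 3600

# Hypothetical work item: requested -> started -> done
item = {
    "requested": "2026-02-02T09:00",  # added to the backlog
    "started":   "2026-02-16T09:00",  # development begins
    "done":      "2026-02-18T17:00",  # delivered
}

lead_time  = hours_between(item["requested"], item["done"])  # 392 h: what stakeholders experience
cycle_time = hours_between(item["started"], item["done"])    # 56 h: execution only
queue_time = lead_time - cycle_time                          # 336 h: time waiting to start
```

In this example, two weeks of queue time dwarf the two and a half days of execution: a prioritization problem, not an execution problem.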

6. Escaped Defects

What it measures: Bugs found in production that should have been caught before release.

Escaped defects in a scrum context usually point to sprint pressure overriding quality gates. Teams that consistently skip testing at the end of a sprint to hit velocity targets export that cost to post-release work, which is always more expensive than catching issues before shipping.

How to use it: Track per sprint and watch the trend. An increase in escaped defects alongside rising velocity is a strong signal: the team is moving faster at the cost of quality. That's not sustainable, and the total time is longer once rework is included.

7. Team Health Score

What it measures: Team morale, psychological safety, and sustainability, typically captured via end-of-sprint pulse surveys.

The Digital.ai State of Agile report consistently shows that team health is among the strongest predictors of long-term delivery performance. Declining morale typically shows up 3-6 months before any degradation is visible in velocity or defect data. Team health is a leading indicator; most other scrum metrics are lagging.

How to use it: Run short pulse surveys (3-5 questions) at the end of each sprint. Track trends, not individual data points. Low scores on items like "our workload is sustainable" or "I have what I need to do my job" are early warning signals.

8. Sprint Scope Creep

What it measures: The percentage of sprint work added after the sprint is committed: items injected mid-sprint that weren't in the original plan.

Some scope creep is unavoidable. Chronic scope creep above 15-20% of sprint work prevents teams from building predictability and erodes engineering trust in the planning process. It also masks true velocity, since the team is constantly working on different things than what was committed.

How to use it: Track per sprint and distinguish between team-initiated changes (the team adds items they underestimated) and stakeholder-injected changes (items added from outside the team). The latter is usually a governance problem that sits above the team.
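The team-vs-stakeholder split can come straight from item metadata. A minimal sketch, assuming each item records its points, whether it was added mid-sprint, and who added it (all field names hypothetical):

```python
def scope_creep_breakdown(items):
    """Percent of sprint work (by points) added after commitment, split by source."""
    total = sum(i["points"] for i in items)
    def pct(subset):
        return 100 * sum(i["points"] for i in subset) / total
    added = [i for i in items if i["added_mid_sprint"]]
    return {
        "total": pct(added),
        "team": pct([i for i in added if i["source"] == "team"]),
        "stakeholder": pct([i for i in added if i["source"] == "stakeholder"]),
    }

# Hypothetical sprint: 34 points committed, 6 points injected mid-sprint
items = [
    {"points": 34, "added_mid_sprint": False, "source": "team"},
    {"points": 2,  "added_mid_sprint": True,  "source": "team"},
    {"points": 4,  "added_mid_sprint": True,  "source": "stakeholder"},
]
print(scope_creep_breakdown(items))  # {'total': 15.0, 'team': 5.0, 'stakeholder': 10.0}
```

At 15% total, this sprint sits at the edge of the chronic range above, and two-thirds of the creep is stakeholder-injected, which points at governance rather than estimation.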

How to Use Scrum Metrics Without Gaming the Numbers

There's a principle in management known as Goodhart's Law: when a measure becomes a target, it ceases to be a good measure. Scrum metrics are particularly vulnerable to this.

Tie velocity to performance reviews and velocity will rise through story point inflation, not faster delivery. Set sprint hit rate targets and teams will start sandbagging commitments to guarantee they meet them. Track escaped defects as a KPI and teams will dispute which issues qualify.

The solution isn't to stop measuring. It's to measure with the right framing.

Measure direction, not absolute values. Is velocity stable? Is cycle time improving quarter-over-quarter? Is the defect rate trending down? Movement matters more than hitting a specific number.

Separate team metrics from individual metrics. Scrum metrics are team-level indicators. Using them to evaluate individual engineers creates exactly the gaming behavior you're trying to prevent, and it damages the psychological safety required for honest retrospectives.

Treat anomalies as questions, not verdicts. A bad sprint doesn't mean the team is failing. A suddenly great sprint might mean they sandbagged. Ask before drawing conclusions.

Pair leading and lagging indicators. Velocity, cycle time, and escaped defects tell you what already happened. Team health scores and sprint scope creep tell you what's coming. Both are required.

Scrum Metrics vs DORA Metrics: What's the Difference?

| Category | Scrum Metrics | DORA Metrics |
| --- | --- | --- |
| Level | Team and sprint | Organization and pipeline |
| Cadence | Sprint-by-sprint | Continuous / release-level |
| Focus | Planning quality, execution consistency | Delivery speed and stability |
| Core measures | Velocity, hit rate, cycle time, team health | Deployment frequency, lead time for changes, change failure rate, MTTR |
| Primary audience | Engineering teams, scrum leads | CTOs, VPs of Engineering |
| When to review | Sprint retrospective (every 1-2 weeks) | Monthly and quarterly |

The two frameworks are complementary. A team can have excellent sprint metrics (high hit rate, stable velocity) but poor DORA metrics. That typically means they're completing sprint work but it's sitting in a release queue rather than shipping to production. The combination of both reveals where the actual bottlenecks are.

For teams running an engineering operations program, both sets of metrics feed into the same reporting layer. The relationship between scrum-level predictability and DORA-level deployment speed often surfaces the highest-value improvements.

For a deeper look at DORA metrics: DORA Metrics: The Definitive Guide for Engineering Leaders.

Tracking Scrum Metrics Across Multiple Teams

Individual teams can track most of these in Jira, Linear, or a spreadsheet. The challenge appears when you're responsible for five or fifteen teams.

At that scale, the questions change: Are teams using consistent definitions? How do you compare trends without making the comparison punitive? When one team's metrics start diverging from their own baseline, how quickly do you find out?

Scrums.com is an engineering intelligence platform that surfaces scrum and DORA metrics across all your engineering teams in a single dashboard, with integrations into the tools your teams already use. No new process, no new workflow required.

See how it works on the platform

FAQ

What is the most important scrum metric?

Sprint goal hit rate. It's the single best indicator of whether a team is planning realistically and delivering on what they commit to. Velocity tells you output volume. Hit rate tells you whether the team can commit and follow through: the foundation of reliable engineering delivery.

How often should scrum metrics be reviewed?

Velocity, burndown, and cycle time at every sprint retrospective (every 1-2 weeks). Team health scores tracked each sprint. Escaped defect rate and lead time are best reviewed monthly to identify trends rather than reacting to individual sprint variance.

Should scrum metrics be used to evaluate individual engineers?

No. Scrum metrics are team-level indicators. Using them to evaluate individuals creates the gaming behavior they're designed to detect, and undermines the psychological safety required for honest retrospectives.

What's the difference between velocity and throughput?

Velocity measures story points completed per sprint: relative effort. Throughput measures the number of work items completed per sprint regardless of size. Throughput is often more useful for trend tracking because it doesn't require consistent story point estimation, which is difficult to maintain over time.

How do scrum metrics relate to DORA metrics?

DORA metrics measure the speed and stability of your software delivery pipeline at the organizational level. Scrum metrics measure team planning quality and sprint execution at the team level. They operate at different levels and both are needed for a complete picture of engineering performance. See the comparison table above.

