Atlassian's $1B Buy: Measure Outcomes, Not Activity

October 7, 2025
13 mins

Introduction

Atlassian just made its largest acquisition ever: $1 billion for DX, a five-year-old developer productivity platform that helps enterprises understand how their engineering teams actually work. Not how they should work according to some methodology. Not what their sprint velocity claims. How they actually work, including bottlenecks and blockers.

For CTOs and engineering leaders, this acquisition isn't just another tech M&A headline. It's a clear signal that the era of building software in the dark is over. When a company that already owns Jira, Bitbucket, and Confluence, tools that generate mountains of workflow data, still pays $1 billion for deeper productivity insights, they're telling you something critical: the data you're already collecting isn't enough to understand whether your teams are effective.

Here's what makes this particularly relevant right now: DX came out of stealth just three years ago and tripled its customer base annually while raising less than $5 million in funding. Atlassian tried building their own developer productivity insight tool for three years before giving up and acquiring DX instead. Their CEO, Mike Cannon-Brookes, was blunt about why: "Using AI is easy, creating value is harder."

This guide examines why engineering analytics have become a $1 billion priority, what separates effective measurement from surveillance theater, and how modern software teams can gain the visibility they need without building (or buying) an entirely new platform.

The Developer Productivity Problem Atlassian Couldn't Solve Alone

Atlassian isn't a scrappy startup gambling on an unproven market. They serve over 300,000 customers globally. Their tools sit at the center of how engineering teams plan, track, and ship software. They have access to more workflow data than almost any company on Earth.

Yet after three years of trying to build an in-house developer productivity insight tool, they acquired DX instead. Why couldn't they solve this internally?

The Data Exists, But Context Doesn't

Project management tools like Jira track what's happening: tickets opened, closed, reassigned. Version control systems like Bitbucket track code changes: commits, pull requests, and merge conflicts. CI/CD pipelines track deployments: build times, test failures, and release frequency.

But none of these systems, individually or combined, answer the questions engineering leaders actually need answered:

  • Are we building the right things, or just building things fast?
  • Where are teams getting stuck, and why?
  • Which investments in tools, training, or process changes actually improve outcomes?
  • How do we know if our AI coding assistants are accelerating delivery or introducing technical debt?

Important: Most engineering analytics fail because they measure activity instead of outcomes. Commits per day, lines of code, and ticket velocity are easy to track but rarely correlate with business value delivered.

DX founder Abi Noda experienced this firsthand as a product manager at GitHub. The metrics he had access to weren't giving him the full picture of what was slowing his teams down. "The assumptions we had about what we needed to help ship products faster were quite different than what the teams and developers were saying was getting in their way," Noda explained when DX emerged from stealth in 2022. "Even teams didn't always have awareness about their own issues and leadership."

The Surveillance Problem

Traditional productivity measurement creates an adversarial relationship between leadership and engineering teams. Developers feel watched. Managers feel blind. Everyone optimizes for metrics that don't actually matter.

DX positioned itself differently from the start: they wanted to build something that didn't make developers feel like they were being surveilled. This isn't just good ethics; it's good business. When developers trust the measurement system, the data becomes more accurate. When they don't, they game the system, and leadership ends up making decisions based on fiction.

Atlassian recognized this cultural fit mattered. Cannon-Brookes noted that 90% of DX's customers were already using Atlassian products, which meant DX had figured out how to layer productivity insights on top of existing workflows without disrupting them.

The AI Measurement Gap

The timing of this acquisition matters. AI coding assistants like GitHub Copilot, Cursor, and Claude have exploded in adoption over the past 18 months. Engineering teams are using these tools extensively, but most organizations have no systematic way to measure whether they're actually improving productivity or just changing how work gets done.

"You suddenly have these budgets that are going up. Is that a good thing?" Cannon-Brookes said. "Is that not a good thing? Am I spending the money in the right ways? It's really, really important and critical."

Without measurement infrastructure in place, companies are flying blind on one of the most significant shifts in software development since the move to cloud infrastructure. DX gives Atlassian the ability to answer these questions at scale, across their entire customer base.

What DX Actually Does (And Why It's Worth $1 Billion)

DX didn't become a $1 billion acquisition by building yet another dashboard of vanity metrics. They built something fundamentally different: a research-driven approach to understanding developer productivity that combines qualitative and quantitative data.

The Research Foundation

DX was founded on the belief that measuring developer productivity was "an unsolved problem that requires a research-driven approach," according to Noda. That research orientation is apparent in how their platform works.

Instead of just tracking what developers do, DX measures:

Developer experience: How do engineers feel about their tools, processes, and ability to ship work? This isn't touchy-feely sentiment analysis; it's structured feedback that correlates with team performance.

Flow state and interruptions: How often are developers able to work in extended, focused sessions versus constantly context-switching between meetings, alerts, and unplanned work?

Bottleneck identification: Where do handoffs break down? Where do reviews stall? Where do deployment processes introduce unnecessary friction?

Comparative benchmarking: How does your team's performance compare to others in your industry, at your company size, or at similar technical maturity levels?

This combination of qualitative and quantitative analysis is what separates effective engineering analytics from spreadsheet theater. You can't understand why teams are struggling by looking at velocity charts alone. You need to understand what developers themselves are experiencing.
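
To make one of those signals concrete: given nothing more than a calendar export, you can approximate how much uninterrupted focus time a schedule leaves. This is a minimal sketch, not DX's actual method; the 90-minute threshold, workday bounds, and data layout are illustrative assumptions.

```python
# Hypothetical calendar export: one developer's meetings for one day,
# sorted and non-overlapping. Times are "HH:MM" strings.
meetings = [("10:00", "10:30"), ("13:00", "14:00"), ("16:30", "17:00")]
workday = ("09:00", "17:30")

def minutes(t: str) -> int:
    h, m = t.split(":")
    return int(h) * 60 + int(m)

# The gaps around meetings are the candidate focus blocks.
bounds = [minutes(workday[0])]
for start, end in meetings:
    bounds += [minutes(start), minutes(end)]
bounds.append(minutes(workday[1]))
gaps = [bounds[i + 1] - bounds[i] for i in range(0, len(bounds), 2)]

deep_work = [g for g in gaps if g >= 90]  # treat 90+ minutes as deep work
print(f"Focus blocks of 90+ min: {len(deep_work)}; longest gap: {max(gaps)} min")
```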

The Feedback Flywheel

DX's real value isn't just measurement; it's the closed-loop system it enables. According to Noda, the platform provides "that full flywheel to get the data and understand where we are unhealthy. They can plug in Atlassian's tools and solutions to go address those bottlenecks."

Here's what that looks like in practice:

  1. Identify the bottleneck: DX surfaces that code reviews are taking 3x longer than industry benchmarks, and developer survey data shows frustration with unclear review expectations.
  2. Understand the root cause: Drill into the data to discover that reviews are stalling on specific types of changes (architectural decisions, security-sensitive code) while routine updates move quickly.
  3. Implement targeted solutions: Rather than "improve code review velocity" (too vague), you create clear guidelines for architectural review processes, assign dedicated reviewers for security code, and set up automated approvals for routine changes.
  4. Measure the impact: Track whether review times improve, whether developers report better experiences, and whether code quality metrics remain stable or improve.

Without this closed-loop approach, measurement becomes an end unto itself. With it, engineering analytics drive continuous improvement.
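
To make steps 1 and 2 concrete, here's a minimal sketch of the kind of analysis involved, assuming you've exported pull request records with creation and first-review timestamps. The field names and change-type labels are hypothetical, not DX's schema.

```python
from datetime import datetime
from statistics import median

# Hypothetical PR export; timestamps are ISO 8601 strings.
prs = [
    {"id": 101, "created_at": "2025-09-01T09:00", "first_review_at": "2025-09-03T15:30", "type": "architectural"},
    {"id": 102, "created_at": "2025-09-02T10:00", "first_review_at": "2025-09-02T13:00", "type": "routine"},
    {"id": 103, "created_at": "2025-09-03T08:00", "first_review_at": "2025-09-08T09:00", "type": "security"},
    {"id": 104, "created_at": "2025-09-04T11:00", "first_review_at": "2025-09-04T12:30", "type": "routine"},
]

def hours_to_first_review(pr):
    created = datetime.fromisoformat(pr["created_at"])
    reviewed = datetime.fromisoformat(pr["first_review_at"])
    return (reviewed - created).total_seconds() / 3600

# Group by change type (step 2): stalls on architectural and security
# reviews shouldn't hide behind fast-moving routine updates.
by_type = {}
for pr in prs:
    by_type.setdefault(pr["type"], []).append(hours_to_first_review(pr))

for change_type, hours in sorted(by_type.items()):
    print(f"{change_type:>14}: median {median(hours):.1f}h to first review (n={len(hours)})")
```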

Pro tip: The most valuable engineering analytics aren't the ones that make leadership feel informed—they're the ones that help teams remove friction from their own workflows. If your developers aren't using the data, neither should you.

The 350-Customer Validation

DX works with more than 350 enterprise customers, including ADP, Adyen, and, notably, GitHub itself, the company where Noda first recognized the need for better productivity measurement. This customer base validates two things:

First, the problem is real and widespread. These aren't early adopters gambling on unproven technology. They're established enterprises willing to invest in measurement infrastructure because they've experienced the cost of building software without visibility.

Second, the solution works. DX tripled its customer base year-over-year while raising less than $5 million in total funding. That kind of growth-without-burn happens when your product creates genuine value that customers will pay for, not when you're subsidizing adoption with venture capital.

Why Engineering Analytics Matter More Than Ever

Three converging trends have made engineering analytics a strategic necessity rather than a nice-to-have:

1. AI Adoption Is Accelerating Faster Than Measurement

GitHub reported that 92% of developers are using AI coding tools in their work. Your teams are almost certainly using these tools, whether you've officially approved them or not. But without systematic measurement, you have no way to answer fundamental questions:

  • Are AI tools reducing the time from concept to deployment?
  • Are they improving code quality, or introducing subtle bugs that surface later?
  • Which types of work benefit most from AI assistance, and which don't?
  • How should you adjust hiring, training, and tool budgets based on actual productivity gains?

The risk isn't just wasted AI spending; it's making strategic decisions based on assumptions rather than evidence. If you believe AI is dramatically accelerating your team's productivity but it's actually just shifting where time gets spent, you'll make incorrect decisions about team size, scope, and timelines.
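
One way to start answering these questions is a simple cohort comparison. A minimal sketch, assuming you can tag PRs (via survey or tooling) with whether AI assistance was used; the sample numbers are hypothetical.

```python
from statistics import mean, median

# Hypothetical sample: lead time in hours (first commit to production)
# for PRs where developers reported using, or not using, AI assistance.
lead_times = {
    "ai_assisted": [18.0, 22.5, 30.0, 12.0, 40.0, 16.5],
    "unassisted":  [26.0, 35.0, 28.5, 44.0, 31.0, 20.0],
}

for cohort, hours in lead_times.items():
    print(f"{cohort:>11}: mean {mean(hours):.1f}h, median {median(hours):.1f}h (n={len(hours)})")

# Note: a raw difference in means is suggestive, not conclusive. AI-assisted
# work may skew toward simpler changes, so control for change size and pair
# this with post-deployment hotfix rates before drawing conclusions.
```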

2. Remote and Distributed Teams Need New Visibility Models

When everyone worked in the same office, you could get a sense of team health through informal observation. You'd notice when someone was stuck on a problem for days, when meetings were eating everyone's calendar, or when a particular process was frustrating the team.

Remote and distributed teams break these informal feedback loops. You need systematic ways to understand:

  • Whether async communication is working or creating hidden delays
  • If timezone differences are creating handoff bottlenecks
  • How team cohesion and collaboration are evolving over time
  • Where documentation gaps are forcing people to hunt down information

Good to know: The most effective remote engineering teams use lightweight, continuous feedback mechanisms rather than heavy quarterly surveys. Daily pulse checks and integrated workflow analytics provide better signal than annual engagement surveys.

3. Engineering Efficiency Directly Impacts Business Outcomes

The relationship between engineering productivity and business performance has never been tighter. For software-driven businesses, the ability to ship features quickly, respond to market changes, and maintain system reliability directly determines competitive positioning.

This creates pressure from both directions. Business leaders want engineering to move faster. Engineering leaders want to protect quality and prevent burnout. Engineering analytics provide the common language both sides need to have productive conversations about tradeoffs:

  • "We can ship this feature 30% faster if we accept this technical debt; here's what the payback timeline looks like."
  • "Our deployment frequency has increased 2x, but our post-deployment hotfix rate hasn't changed; we're actually getting faster and more stable."
  • "Team velocity looks flat over the past quarter, but complexity per feature has increased significantly; we're tackling harder problems."

Without data, these conversations devolve into feelings and politics. With data, they become strategic discussions about resource allocation and risk management.

The Hidden Cost of Building Without Visibility

Atlassian's three-year failed attempt to build their own productivity insight tool reveals an uncomfortable truth: even companies with deep engineering expertise and massive workflow datasets struggle to solve the measurement problem.

What Most Organizations Try (And Why It Fails)

Approach 1: Sprint velocity and story points

Teams track velocity: how many story points they complete per sprint. Leadership uses this to predict delivery timelines and compare team productivity.

Why it fails: Story points are relative, not absolute. Ten points on one team doesn't equal ten points on another team. Velocity measures output, not outcomes. Teams optimize for velocity instead of value delivered. And most critically, velocity tells you nothing about whether you're building the right things.

Approach 2: Code metrics (commits, PRs, lines changed)

Teams track activity: how many commits per developer, how many pull requests opened, how many lines of code changed.

Why it fails: These metrics are trivially easy to game and rarely correlate with meaningful outcomes. More commits can mean more progress or more thrashing. More pull requests can mean better code review discipline or over-fragmented work. More lines changed can mean significant new features or inefficient refactoring.

Approach 3: Quarterly engineering surveys

Leadership runs periodic surveys asking developers about their satisfaction, blockers, and suggestions for improvement.

Why it fails: Quarterly feedback is too infrequent to be actionable. Response rates drop over time. Survey fatigue sets in. And most critically, surveys reveal problems but don't quantify their impact: you learn that code reviews are frustrating, but not whether fixing them would be your highest-value intervention.

The Opportunity Cost

While you're debating which metrics to track and building custom dashboards, your competitors are systematically identifying and removing friction from their development processes. The cost isn't just the time and money spent on measurement infrastructure that doesn't work; it's the compounding effect of building software inefficiently while others optimize.

Consider this scenario: Two companies start with equivalent engineering teams and technical capabilities. One implements effective engineering analytics and uses that visibility to remove one major bottleneck per quarter. The other operates without systematic measurement, making improvements based on gut feel and anecdotal feedback.

After two years, the first company has removed eight significant sources of friction from their development process. Their teams ship faster, with higher quality, and with better morale. The second company has made improvements too, but without measurement to guide prioritization, they've spent effort fixing things that didn't matter while missing problems that did.

The gap compounds. The first company can respond to market opportunities faster, iterate on products more aggressively, and attract and retain engineering talent more effectively. The second company wonders why their productivity keeps slipping despite adding headcount.

Warning: The most expensive engineering decisions are the ones you don't realize you're making. Without visibility into how your teams actually work, you're making implicit choices about where effort goes, and you won't know whether those choices were right until months or years later.

What Effective Engineering Measurement Looks Like

Based on DX's research-driven approach and Atlassian's validation of its value, here's what separates measurement that drives improvement from measurement theater:

Principle 1: Measure Outcomes, Not Activity

Effective measurement focuses on whether teams are successfully delivering value, not whether they're busy. This means tracking things like:

  • Deployment frequency: How often are you shipping changes to production?
  • Lead time for changes: How long does it take to go from code commit to production deployment?
  • Change failure rate: What percentage of production changes require remediation?
  • Time to restore service: How quickly can you recover when things break?

These metrics, popularized by the DORA research program, correlate with business outcomes. High-performing engineering organizations consistently demonstrate higher deployment frequency, shorter lead times, lower change failure rates, and faster recovery times than their lower-performing peers.
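
As a rough illustration, all four DORA metrics can be computed from a plain deployment log. A minimal sketch with hypothetical field names, not a vendor API:

```python
from datetime import datetime

# Hypothetical deployment log: one record per production deploy.
deploys = [
    {"at": "2025-09-01T10:00", "commit_at": "2025-08-30T16:00", "failed": False},
    {"at": "2025-09-03T14:00", "commit_at": "2025-09-02T09:00", "failed": True, "restored_at": "2025-09-03T15:10"},
    {"at": "2025-09-05T11:00", "commit_at": "2025-09-04T13:00", "failed": False},
]

def hours_between(start: str, end: str) -> float:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

window_days = 7  # the period this log covers
lead_times = sorted(hours_between(d["commit_at"], d["at"]) for d in deploys)
failures = [d for d in deploys if d["failed"]]

print(f"Deployment frequency: {len(deploys) / window_days:.2f} deploys/day")
print(f"Median lead time:     {lead_times[len(lead_times) // 2]:.1f}h")
print(f"Change failure rate:  {len(failures) / len(deploys):.0%}")
if failures:
    restores = [hours_between(d["at"], d["restored_at"]) for d in failures]
    print(f"Mean time to restore: {sum(restores) / len(restores):.1f}h")
```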

But metrics alone aren't enough. You also need to understand the developer experience factors that enable (or prevent) good performance.

Principle 2: Combine Quantitative Data with Qualitative Context

Numbers tell you what's happening. Developer feedback tells you why. The most effective measurement systems combine both.

Quantitative signals:

  • Code review turnaround times
  • Build and test execution duration
  • Deployment pipeline reliability
  • Incident response and recovery metrics

Qualitative signals:

  • Developer satisfaction with tools and processes
  • Perceived blockers to productivity
  • Team psychological safety and collaboration quality
  • Clarity of technical direction and priorities

When you see code review times increasing, the quantitative data tells you there's a problem. The qualitative data tells you whether it's because reviewers are overloaded, expectations are unclear, changes are too large, or something else entirely. Without both signals, you're guessing at solutions.
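
Here's a minimal sketch of how the two signal types can be combined, with made-up per-team numbers and illustrative thresholds:

```python
# Hypothetical per-team data: median review turnaround (hours) from the
# version control system, plus the mean survey score (1-5) for the
# statement "review expectations are clear".
teams = {
    "payments": {"review_hours": 52.0, "clarity_score": 2.1},
    "platform": {"review_hours": 18.0, "clarity_score": 4.2},
    "mobile":   {"review_hours": 47.0, "clarity_score": 4.0},
    "data":     {"review_hours": 20.0, "clarity_score": 3.9},
}

SLOW, UNCLEAR = 36.0, 3.0  # illustrative thresholds

# The same quantitative symptom (slow reviews) points to different fixes
# depending on the qualitative signal behind it.
for name, t in teams.items():
    if t["review_hours"] > SLOW and t["clarity_score"] < UNCLEAR:
        print(f"{name}: slow reviews, unclear expectations -> write review guidelines")
    elif t["review_hours"] > SLOW:
        print(f"{name}: slow reviews despite clear expectations -> check reviewer load")
```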

Principle 3: Make Data Accessible to Those Who Can Act on It

Engineering analytics should primarily serve the teams doing the work, not just leadership trying to track progress. When developers can see which processes are creating friction and have the agency to improve them, measurement drives cultural change.

This means:

  • Developers can see their team's metrics and compare them to benchmarks
  • Teams can run experiments and measure the impact of process changes
  • Data access doesn't require going through management or data teams
  • Insights surface in the tools developers already use, not separate dashboards they have to remember to check

Pro tip: If your engineering analytics are only visible to VPs and above, you're using measurement to create accountability rather than enable improvement. The teams closest to the work should be the primary consumers of productivity data.

Principle 4: Benchmark Against Meaningful Comparisons

Absolute metrics rarely provide useful context. Is a two-day code review turnaround time good or bad? It depends on what you're building, how your team is structured, and what your industry norms are.

Effective measurement systems provide comparative context:

  • How does your performance compare to similar companies in your industry?
  • How does your performance compare to other teams within your organization?
  • How has your performance changed over time as you've made process improvements?

DX built this comparative benchmarking into their platform from the start. You can see how your code review times compare to others in your industry, at your company size, or with similar technical maturity. This context transforms raw data into actionable insights.
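
A minimal sketch of the percentile framing, using made-up benchmark numbers rather than DX's actual dataset:

```python
# Hypothetical industry benchmark: median code review turnaround (hours)
# for a set of peer companies of similar size.
benchmark_hours = [8, 12, 14, 18, 20, 24, 26, 30, 36, 48, 60, 72]

your_review_hours = 26
slower_peers = sum(1 for h in benchmark_hours if h > your_review_hours)
share = slower_peers / len(benchmark_hours)

# Lower is better here, so "faster than X% of peers" is the useful frame.
print(f"A {your_review_hours}h turnaround is faster than {share:.0%} of benchmark peers")
```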

The Platform-First Approach to Engineering Visibility

Atlassian's acquisition of DX doesn't just add a new product to their portfolio; it enables them to build engineering analytics into their entire system of work. This platform-first approach matters for two reasons.

Integration Eliminates Manual Data Gathering

When measurement tools integrate directly with your existing workflow systems, data collection happens automatically. Developers don't fill out additional forms or context-switch to separate tools. The platform captures workflow data as a natural byproduct of how teams already work.

This integration has second-order effects: more complete data, less overhead, and most importantly, adoption that doesn't depend on changing developer behavior. If your measurement system requires developers to do extra work, they won't use it consistently, and your data will be incomplete and biased toward the teams that already care about measurement.
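
For a sense of what capturing data "as a natural byproduct" means in practice, here is a minimal sketch of a webhook receiver that records deployment events emitted by an existing CI system. The endpoint, payload fields, and in-memory store are assumptions for illustration, not any specific vendor's API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

events = []  # in production this would be a durable store, not a list

class DeployHook(BaseHTTPRequestHandler):
    """Accepts POSTed deploy notifications from CI; no developer action needed."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Keep only the fields the analytics need.
        events.append({
            "service": payload.get("service"),
            "deployed_at": payload.get("timestamp"),
            "status": payload.get("status"),
        })
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DeployHook).serve_forever()
```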

Platforms Enable Closed-Loop Improvement

The real power of platform-integrated analytics comes from closing the loop: identifying bottlenecks, implementing solutions using platform tools, and measuring whether those solutions actually worked.

This is what DX founder Noda meant by the "full flywheel": DX identifies that your deployment pipeline is your primary bottleneck. Atlassian's platform provides Bitbucket Pipelines to optimize your CI/CD. DX measures whether deployment frequency actually improved and whether developers report better experiences.

Without platform integration, there's a gap between insight and action. With it, the platform becomes a system for continuous improvement, not just continuous measurement.

When to Consider a Software Engineering Orchestration Platform

For organizations building software at scale, engineering analytics are just one component of a larger challenge: orchestrating complex development efforts across distributed teams, changing requirements, and evolving technical landscapes.

Modern software engineering orchestration platforms combine several capabilities that traditional development tools lack:

Real-time visibility across the full development lifecycle: From planning through deployment, you can see where work is flowing smoothly and where it's getting stuck. This visibility extends beyond individual tools to show how your entire engineering system is performing.

AI-powered oversight and quality assurance: Rather than relying purely on code review and testing to catch issues, AI agents can provide continuous monitoring of code quality, security vulnerabilities, and architectural decisions. This doesn't replace human judgment; it augments it by surfacing problems earlier and more systematically.

Flexible team composition and scaling: As your roadmap evolves, you need the ability to adjust team composition quickly, adding specialized expertise for new initiatives, scaling capacity during critical delivery periods, or reallocating resources as priorities shift.

Engineering analytics that drive decisions: Beyond measuring activity, you need insights that inform resource allocation, technical investment, and process improvement. The best platforms surface patterns across teams and projects that individual dashboards miss.

This platform-first approach particularly suits organizations pursuing custom software development, where requirements evolve rapidly, technical complexity is high, and the cost of poor visibility compounds quickly.

For businesses offering ecommerce website development services, time-to-market often determines success or failure. Platform-integrated analytics help you understand whether your development process is accelerating or whether hidden bottlenecks are quietly extending timelines.

How Subscription-Based Development Models Change the Equation

Traditional software development partnerships force a choice between flexibility and commitment. Fixed-price projects provide cost certainty but struggle with changing requirements. Time-and-materials engagements offer flexibility but create budget unpredictability.

Subscription-based access to engineering teams enables a different model, one where visibility and measurement are built into the engagement from day one:

Transparent capacity allocation: You can see exactly how engineering time is being spent across your initiatives, making it possible to reallocate resources as priorities shift without renegotiating contracts or changing vendors.

Continuous performance measurement: Rather than waiting for project completion to evaluate outcomes, subscription models enable ongoing assessment of velocity, quality, and team effectiveness. If something isn't working, you adjust immediately rather than months later.

On-demand expertise scaling: When you identify bottlenecks through engineering analytics, you can bring in specialized expertise to address them, adding AI capabilities, cloud architecture knowledge, or specific technical skills without the overhead of traditional hiring.

Aligned incentives around outcomes: When you're not locked into a specific scope or timeline, the development team's incentive shifts from delivering what was specified to delivering what actually creates value. This alignment makes engineering analytics more useful because everyone is optimizing for the same outcomes.

Modern software development companies increasingly combine platform-first engineering orchestration with subscription-based access to technical talent. Rather than choosing between build-it-yourself and traditional outsourcing, you can access a complete software engineering system that includes the visibility, measurement, and continuous improvement capabilities that Atlassian just paid $1 billion to acquire.

The Strategic Implications for Engineering Leaders

Atlassian's acquisition of DX represents more than a product portfolio expansion. It's a statement about what engineering excellence requires in 2025 and beyond: systematic visibility into how teams work, data-driven continuous improvement, and the ability to measure whether investments in tools, process, and people are actually paying off.

Engineering Analytics Are Infrastructure, Not Instrumentation

The most important lesson from Atlassian's three-year failed attempt to build their own productivity insights: engineering analytics are infrastructure-level problems, not application-level features you can bolt on afterward.

Just as you wouldn't build your own cloud platform when AWS, Azure, and GCP exist, you shouldn't try to build comprehensive engineering analytics from scratch when purpose-built platforms already solve this problem. The complexity isn't just in collecting data; it's in making sense of that data, providing meaningful benchmarks, and enabling teams to act on insights.

Measurement Without Action Is Just Surveillance

The distinction between effective engineering analytics and surveillance theater comes down to one question: Does this measurement help teams improve, or does it just help management watch?

If developers don't have access to the data and agency to act on it, you've built a monitoring system, not an improvement system. If the metrics you track are easily gamed and don't correlate with business outcomes, you've created incentives for optimization theater rather than real productivity gains.

The companies that get this right, and DX's customer growth suggests they're helping companies get it right, use measurement as a tool for empowerment, not control. Teams can see where they're effective, understand where they're struggling, and have the context and support to make improvements.

The AI Era Demands Better Measurement

As AI coding assistants become ubiquitous in software development, the need for systematic productivity measurement will only intensify. Companies making significant investments in AI tools need to know whether those investments are paying off. Teams adopting AI assistance need to understand which types of work benefit most and where human judgment remains critical.

Without robust engineering analytics, the AI era in software development will be characterized by anecdotal evidence, survivorship bias, and decision-making based on vendor claims rather than actual outcomes. Companies that build or acquire strong measurement capabilities now will have a significant advantage in understanding how to leverage AI effectively.

Conclusion

When Atlassian pays $1 billion for a five-year-old startup that helps enterprises measure developer productivity, they're telling us something important about the future of software development: building without visibility isn't sustainable anymore.

The companies that will thrive aren't the ones writing the most code or deploying most frequently. They're the ones that understand how their engineering systems actually work, where friction exists, what improvements matter most, and whether their investments in tools, process, and people are creating genuine productivity gains or just activity theater.

Engineering analytics aren't a luxury for organizations with unlimited resources and time to build custom measurement systems. They're fundamental infrastructure for any business that depends on software delivery speed and quality for competitive advantage. The question isn't whether you need visibility into how your teams work; it's whether you're building it yourself, buying it, or operating without it.

Ready to understand how your engineering teams actually perform? Explore how Scrums.com's Software Engineering Orchestration Platform provides real-time visibility, AI-powered oversight, and the analytics that drive continuous improvement, without requiring you to build, buy, or integrate disparate measurement systems.
