Nearly 90% of development teams are now using AI every day. And the DORA 2025 report has documented something strange—teams using AI ship code faster, but their systems are becoming less stable. Deployment frequency climbs while change failure rates spike.
Your developers feel significantly more productive. But your DORA metrics are telling a different story. Review times balloon by 91% and bug counts rise 9%.
AI works exactly as designed—it makes writing code faster. The problem is everything around it. When you accelerate one part of your delivery pipeline without strengthening the rest, bottlenecks just pop up elsewhere. This is a defining challenge in the post-DevOps era, where automation outpaces organisational capability.
The DORA AI Capabilities Model lays out seven foundational capabilities that determine whether AI amplifies your organisation’s strengths or magnifies its weaknesses. Teams implementing all seven see measurable gains. Teams focusing only on code generation tools watch their gains evaporate into downstream chaos.
What is the AI paradox in software delivery?
Here’s the thing—AI adoption correlates with higher deployment frequency and reduced lead times, but at the same time it increases change failure rates and mean time to recovery. DORA reorganised its 2025 metrics to show this clearly: throughput metrics go up, and instability metrics go up with them.
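To make the two dials concrete, here’s a minimal sketch of how the headline metrics fall out of a deployment log (the records and field layout below are hypothetical, not DORA data). Note how AI can push the first number up while the second climbs right alongside it:

```python
from datetime import date

# Hypothetical 30-day deployment log: (date shipped, did the change fail in prod?)
deployments = [
    (date(2025, 6, 2), False), (date(2025, 6, 3), True),
    (date(2025, 6, 4), False), (date(2025, 6, 5), True),
    (date(2025, 6, 6), False), (date(2025, 6, 9), True),
]
days_in_period = 30

# Throughput: how often the team ships.
deployment_frequency = len(deployments) / days_in_period

# Instability: what share of changes cause a production failure.
change_failure_rate = sum(failed for _, failed in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f} deploys/day")
print(f"Change failure rate:  {change_failure_rate:.0%}")
```

Both numbers can rise together; neither one alone tells you whether AI is helping.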
Developers complete 21% more tasks and merge 98% more pull requests. But those same teams are experiencing longer review times, larger pull requests, and more defects. Code churn nearly doubles when teams lean heavily on AI-generated suggestions.
Around 30% of developers maintain little or no trust in AI output despite using it every single day. They’re using tools they don’t trust because the velocity gains feel real. More than 80% of individual developers report feeling more productive. But organisational metrics show the reality: instability is increasing.
The DORA report found no evidence that the speed gains justify the trade-off when instability rises without proper quality gates in place. Teams using AI for over a year report more consistent delivery, but newer adopters are hitting instability problems because their validation systems lag behind their automation speed.
The paradox exists because AI optimises local productivity without addressing system-level constraints. It’s a symptom of organisational maturity gaps.
Why does AI increase delivery speed but also increase instability?
AI speeds up code generation but testing, review, security scanning, and deployment processes don’t scale proportionally. It’s like speeding up one machine on an assembly line while leaving the others untouched—you end up with a pile-up at the next station.
You have a finite number of senior engineers. When developers touch 47% more pull requests per day, the review queue grows significantly. Among teams using AI tools, nearly 60% report that deployments cause problems at least half the time.
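A toy arithmetic model makes the pile-up from the assembly-line analogy visible. The rates below are illustrative, not from the report; the only load-bearing fact is that arrivals now exceed review capacity:

```python
# A toy queue model of the review bottleneck. All rates are illustrative:
# AI raises PR arrivals ~47%, but senior review capacity stays fixed.
arrival_rate = 10 * 1.47   # PRs opened per day with AI assistance
review_capacity = 12       # PRs senior engineers can review per day

backlog = 0.0
for day in range(1, 11):
    backlog = max(0.0, backlog + arrival_rate - review_capacity)
    print(f"day {day}: review backlog = {backlog:.1f} PRs")

# Pre-AI, 10 arrivals/day against 12 reviews/day meant the queue drained.
# Post-AI, 14.7 arrivals/day means the backlog grows ~2.7 PRs every day.
```

Once arrivals cross capacity, the backlog doesn’t plateau; it grows without bound until something else gives.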
The productivity gains come with a hidden cost: cognitive load doesn’t disappear, it just changes form. Developers trade writing boilerplate for validating AI outputs, refining prompts, and context-switching. That switching is now part of the job as the developer role evolves toward orchestration and oversight.
AI-generated code carries a non-trivial error rate. When the rest of your software delivery pipeline isn’t strengthened to catch those errors, they compound.
Without lifecycle-wide modernisation, AI’s benefits are quickly neutralised. Some teams with strong platforms accept higher failure rates because they can recover quickly. But that’s a deliberate choice backed by solid recovery processes.
What is the DORA AI Capabilities Model?
The DORA AI Capabilities Model identifies seven foundational capabilities that amplify AI benefits and mitigate instability risks:
- Clear and communicated AI stance – clarity on permitted tools, usage expectations, data policies
- Healthy data ecosystems – quality, accessible, unified internal data
- AI-accessible internal data – context integration beyond generic assistance
- Strong version control practices – mature branching strategies, rollback capabilities
- Working in small batches – incremental changes rather than large releases
- User-centric focus – product strategy clarity
- Quality internal platforms – self-service platforms reducing cognitive load
These capabilities substantially amplify or unlock AI benefits. High performers implement all seven together. Low performers focus only on code generation tools.
AI success is fundamentally a systems problem. Buying GitHub Copilot is easy; building healthy data ecosystems and quality platforms requires organisational transformation.
Research shows these capabilities determine whether individual gains translate to organisational improvements. Without these foundations, downstream bottlenecks absorb individual productivity improvements.
How does AI act as an amplifier of organisational strengths and weaknesses?
AI amplifies existing organisational capabilities rather than creating new ones. Teams with strong testing cultures ship faster and more reliably with AI. Teams with weak testing ship faster but less reliably.
AI functions as both mirror and multiplier. If your organisation has healthy data ecosystems, AI leverages context to generate better code. If data is siloed, AI generates generic code requiring heavy rework. Psychological safety enables experimentation and learning from AI mistakes. Blame culture causes teams to hide AI usage.
Platform maturity shows the strongest correlation. Organisations with mature platforms see AI gains translate to organisational performance. Those without platforms see gains absorbed by toil. Self-service environments, standardised pipelines, and guardrails stabilise outcomes under automation load.
The greatest returns on AI investment come from concentrating on the underlying organisational system rather than tools. Without proper quality gates, the AI amplifier effect compounds organisational disadvantages.
What are the seven DORA AI capabilities that amplify AI benefits?
Let’s dig into each of the seven capabilities and how they enable safe AI adoption.
Clear AI Stance
You need organisational clarity on expectations, permitted tools, and policy applicability. Define tool permissions, data handling requirements, output validation expectations, and responsible AI guidelines.
Organisations moving from experimentation to operationalisation establish usage guidelines, provide role-specific training, build internal playbooks, and create communities of practice. Developers need clear permission to experiment without fear. Clear boundaries and expectations reduce the trust gap.
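One way to make a stance enforceable rather than aspirational is policy as code that CI can check. This is a hedged sketch; the field names and the `check_pr` helper are hypothetical, not any standard:

```python
# Hypothetical AI-usage policy, expressed as data so CI can enforce it.
AI_POLICY = {
    "approved_tools": ["GitHub Copilot", "internal-llm-gateway"],
    "forbidden_inputs": ["customer PII", "production credentials"],
    "label_ai_changes": True,   # PRs must declare AI assistance for reviewers
}

def check_pr(pr: dict) -> list[str]:
    """Return policy violations for a pull-request metadata dict."""
    violations = []
    if pr.get("used_ai") and not pr.get("ai_label"):
        violations.append("AI-assisted change is not labelled")
    tool = pr.get("tool")
    if tool and tool not in AI_POLICY["approved_tools"]:
        violations.append(f"unapproved tool: {tool}")
    return violations

print(check_pr({"used_ai": True, "ai_label": False, "tool": "random-plugin"}))
# ['AI-assisted change is not labelled', 'unapproved tool: random-plugin']
```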
Healthy Data Ecosystems & AI-Accessible Internal Data
Quality, accessible, unified internal data forms the substrate AI needs. Generic AI tools produce generic outputs without organisational context. Connect AI tools to internal documentation, codebases, and decision logs, and output quality improves.
Healthy data ecosystems mean unified, documented, accessible data—the infrastructure layer. AI-accessible internal data means retrieval mechanisms that let AI actually use it—the access layer. When internal data is high-quality and accessible, AI provides contextual assistance rather than guessing.
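Here is what the access layer looks like in miniature: retrieve the most relevant internal documents and put them in front of the model. The sketch below uses naive keyword overlap where a real system would use embeddings, and every document name is made up:

```python
# Minimal retrieval over internal docs; naive scoring stands in for embeddings.
internal_docs = {
    "payments-runbook.md": "payments retries use exponential backoff with jitter",
    "adr-012-auth.md": "we chose oauth2 with short-lived tokens for all services",
    "coding-standards.md": "services expose /healthz and emit structured json logs",
}

def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank docs by shared words with the query; return the best top_k."""
    words = set(query.lower().split())
    ranked = sorted(docs.items(),
                    key=lambda kv: len(words & set(kv[1].split())),
                    reverse=True)
    return [f"## {name}\n{text}" for name, text in ranked[:top_k]]

def build_prompt(task: str) -> str:
    context = "\n\n".join(retrieve(task, internal_docs))
    return f"Internal context:\n{context}\n\nTask: {task}"

print(build_prompt("add retries to the payments service"))
```

The infrastructure layer (clean, documented data) determines whether there is anything worth retrieving in the first place.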
Strong Version Control Practices
Mature development workflows and rollback capabilities matter more when AI accelerates code generation. Frequent commits amplify AI’s benefits, while strong rollback capabilities protect team performance as the volume of AI-generated code grows. Version control becomes the safety net for safe experimentation.
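As a sketch of the rollback half, here’s a post-deploy check that reverts the last merge if the service stops answering. The health endpoint and the deploy-equals-merge-commit assumption are both hypothetical; the git commands themselves are standard:

```python
import subprocess
import urllib.request

HEALTH_URL = "https://example.internal/healthz"  # hypothetical endpoint

def healthy() -> bool:
    """Probe the service once; treat any error as unhealthy."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def revert_last_merge() -> None:
    """Revert HEAD, assumed to be the merge commit that was just deployed."""
    sha = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    # -m 1 reverts the merge relative to its first parent (the mainline).
    subprocess.run(["git", "revert", "--no-edit", "-m", "1", sha], check=True)

if not healthy():
    revert_last_merge()  # frequent, small commits keep this revert surgical
```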
Working in Small Batches
Deploy frequently in small increments rather than large releases. This reduces blast radius when AI makes mistakes. AI consistently increases pull request size by 154%. Small batch discipline forces incremental changes that enable faster detection and recovery. Even when AI increases change failure rate, small batches limit impact.
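Small-batch discipline can be enforced mechanically. Below is a hedged sketch of a CI gate that fails oversized branches; the 400-line budget is an arbitrary illustration, not a DORA recommendation:

```python
import subprocess
import sys

MAX_CHANGED_LINES = 400  # illustrative budget; tune to your team

def changed_lines(base: str = "origin/main") -> int:
    """Sum added + deleted lines in the diff against the base branch."""
    out = subprocess.check_output(["git", "diff", "--numstat", base], text=True)
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    n = changed_lines()
    if n > MAX_CHANGED_LINES:
        sys.exit(f"batch too large: {n} changed lines (budget {MAX_CHANGED_LINES})")
    print(f"batch size OK: {n} changed lines")
```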
User-Centric Focus
User-centric focus is the only capability whose absence actively correlates with worse outcomes: without it, AI adoption can harm team performance. AI simply enables building the wrong features faster.
Teams need understanding of their end users and their feedback incorporated into product roadmaps. This prevents productivity theatre—generating code without delivering customer value.
Quality Internal Platforms
Platform engineering reduces developer cognitive overhead through self-service infrastructure. This is where AI governance lives.
Quality platforms provide self-service environments, standardised pipelines, and guardrails. Golden paths give AI-generated code a structured path through testing, security scanning, and deployment. Platform teams pave paths of least resistance and connect the toolchain.
When AI scales code generation, platforms scale the quality gates. Organisations investing in platform maturity report quieter incident queues. This is why platform engineering at SMB scale provides the foundation for safe AI adoption through systematic guardrails.
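In miniature, a golden path is just an ordered, non-skippable gate sequence. The commands below are placeholders for whatever your platform actually runs (`deploy.sh` is hypothetical; `pytest` and `bandit` stand in for your test runner and security scanner):

```python
import subprocess
import sys

# Ordered gates every change passes, AI-generated or not.
GATES = [
    ("unit tests", ["pytest", "-q"]),
    ("security scan", ["bandit", "-r", "src/"]),
    ("deploy to staging", ["./deploy.sh", "staging"]),  # hypothetical script
]

def run_golden_path() -> None:
    for name, cmd in GATES:
        print(f"gate: {name}")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"blocked at gate '{name}'; nothing reaches production")
    print("all gates passed; promoting to production")

if __name__ == "__main__":
    run_golden_path()
```

The value is the ordering and the non-optionality: AI can generate code faster, but nothing skips a gate.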
What are the seven team archetypes in the DORA 2025 report?
Understanding these capabilities matters because different teams need different approaches. DORA identified seven distinct team archetypes, each facing unique AI adoption challenges. These archetypes connect closely to organisational structure and cognitive load patterns, determining how effectively teams can adopt AI.
DORA replaced traditional performance tiers with seven team archetypes based on throughput, instability, and well-being.
Harmonious High-Achievers (20%): Excel across all dimensions. Use AI with strong quality gates and see measurable gains.
Pragmatic Performers (20%): Deliver speed and stability but haven’t reached peak engagement. AI adoption focuses on reducing friction.
Stable and Methodical (15%): High-quality work at sustainable pace. AI helps increase velocity without sacrificing quality.
High Impact, Low Cadence (7%): High-impact work but low throughput and high instability. AI risks making instability worse without testing infrastructure.
Constrained by Process (17%): Inefficient processes consume effort. AI gains evaporate into process overhead.
Legacy Bottleneck (11%): Unstable systems dictate work. AI without platform investment makes things worse.
Foundational Challenges (10%): Survival mode with significant gaps. Need basic capabilities before AI makes sense.
The top two archetypes account for 40% of teams. The archetypes help diagnose which capabilities to invest in based on your current performance profile.
Why doesn’t individual AI productivity translate to organisational improvements?
The team archetypes help diagnose organisational maturity, but they also reveal why individual productivity gains often vanish. Even high-performing teams struggle to translate personal productivity into organisational outcomes.
A controlled METR study found developers took 19% longer with AI assistance yet believed they had been faster. Organisational metrics remained flat even as individual developers reported significant productivity gains.
Downstream constraints absorb the gains. Many teams still deployed on fixed schedules because nothing downstream had changed. Those stubborn results sit outside developer control: you can write code twice as fast, but you can’t make a weekly release train run twice as fast, so company-wide DORA metrics stay flat.
Without Value Stream Management, teams optimise locally while constraints shift to review and deployment stages.
How does Value Stream Management act as AI governance?
The solution to this translation problem lies in Value Stream Management—a practice that reveals where productivity gains disappear.
Value Stream Management provides the systems-level view to ensure AI gets applied to actual constraints. VSM measures end-to-end flow from idea to customer value, preventing productivity theatre. Without VSM, AI creates localised pockets of productivity lost to downstream chaos.
If testing or deployment can’t handle increased volume, the overall system gains nothing. VSM reveals where gains evaporate.
Organisations with mature VSM practices see amplified benefits. Teams with mature measurement practices successfully translate AI gains to team and product performance.
VSM identifies the true constraint. If code review is the bottleneck, perhaps AI should help review code rather than generate more of it.
Golden paths in platforms provide VSM instrumentation points. When AI-generated code flows through standardised pipelines, you measure cycle time and identify bottlenecks. Value Stream Management as a platform capability creates governance that lets AI scale safely through systematic measurement and constraint identification.
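Instrumenting those golden paths makes constraint-finding mechanical. Here is a minimal sketch using hypothetical timestamps for a single work item; the point is that the slowest stage, not coding, is where the next hour of investment belongs:

```python
from datetime import datetime

# Stage-boundary timestamps for one change, idea to production (illustrative).
item = {
    "idea":        datetime(2025, 6, 2, 9, 0),
    "code_done":   datetime(2025, 6, 2, 15, 0),  # coding took 6h (AI helped)
    "review_done": datetime(2025, 6, 5, 11, 0),  # review sat for days
    "deployed":    datetime(2025, 6, 9, 10, 0),  # waited for the release train
}

boundaries = list(item.items())
durations = {
    f"{a} -> {b}": (t2 - t1).total_seconds() / 3600
    for (a, t1), (b, t2) in zip(boundaries, boundaries[1:])
}

for stage, hours in durations.items():
    print(f"{stage}: {hours:.1f}h")

constraint = max(durations, key=durations.get)
print(f"constraint: {constraint}; aim AI (and people) at this stage")
```

Run over every work item instead of one, the same arithmetic becomes your value stream map.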
What should you do about the AI paradox?
AI adoption is no longer optional—with 90% of teams already using AI daily, non-adoption puts you at a competitive disadvantage. But you need to invest equally in capabilities that prevent instability.
Implement all seven DORA AI capabilities together. Clear AI stance gives developers permission to experiment. Healthy data ecosystems make AI useful rather than generic. Version control and small batches provide safety nets. User-centric focus prevents velocity in the wrong direction. Platforms provide guardrails that let AI scale.
Build Value Stream Management to diagnose where gains evaporate. Measure end-to-end flow, identify constraints, direct AI investment toward bottlenecks.
Invest in platform engineering. Platform maturity correlates strongly with successful AI adoption. Platforms provide self-service capabilities, reduce cognitive load, and enforce standards. Golden paths guide AI-generated code through automated testing, security scanning, and deployment.
Recognise your team archetype. Harmonious High-Achievers can adopt aggressively. Foundational Challenges teams need basic capabilities first. Team structure affects AI effectiveness, and understanding your archetype helps target investment where it matters most.
The goal is using AI safely and effectively. Build foundations that amplify benefits and mitigate risks. Measure what matters. Invest in capabilities, not just tools.
The teams getting organisational gains from AI today are the ones that invested in platforms, measurement, and quality gates before ramping up adoption. The broader DevOps transformation challenges provide crucial context: AI adoption is another layer requiring systematic organisational capability. The AI paradox resolves when you treat it as a systems problem. Speed and stability aren’t mutually exclusive, but you need mature capabilities to get both.