Agile is how software is built. Its concepts, its practices, its strategies have permeated software development, even in teams that are not Agile practitioners.
Agile's aim was to keep the developers of software aligned with the users of the software. Alignment was maintained through feedback loops, sync points with the stakeholders, and those loops were kept short: software was built incrementally and iteratively, and the increments were kept small so iteration could happen quickly and developers and stakeholders could never drift too far out of sync. This is what Agile user stories and sprints grew out of, and technical practices like CI/CD developed to support them.
Now AI has broken Agile. Coding assistants and agents have changed the flow of software development: where time is spent, where costs are generated, where the sync points are.
We’re going to take a quick look at what AI is doing to Agile and what can be done to get the best of both. Rather than swapping between AI assistant, AI agent, coding agent, and so on, we’re just going to call it AI.
How AI breaks Agile
Getting humans to iterate on code is expensive, so you want to get it right. You want your developers building the right pieces. This is why user stories, sprint planning, stand-ups, ticket grooming and so on exist. It’s all done to reduce risk: the risk that your developers just spent weeks on the wrong code.
Getting AI to iterate on code is cheap and fast compared to getting humans to do it. It becomes the cheapest and fastest part of the process: a two-week sprint can be completed in an hour or two. This changes project cadence and scheduling, and it disrupts the stakeholder sync points.
Do you have meetings every few hours to discuss the new feature implementation? Do you discuss forty new features at the next stakeholder meeting? What do you cover in your stand-ups?
AI shifts effort from coding to reviewing, except the review schedule has been decoupled from human effort and timing. It is easy to set a few agents running that go on to generate a constant flow of PRs, each encompassing thousands of lines of changes across hundreds of files.
Review fatigue is real, and it results in developers skimming a fraction of the changes in a PR before accepting it. And they are going to accept it, because they have been accepting PRs for weeks, their knowledge of the codebase is stale, and getting back up to speed plus doing a proper review of the PR would take as long as implementing the changes themselves.
The consequence of review fatigue is technical debt. Without pushback on its output, AI accumulates poorly architected code on top of poorly architected code. Eventually, an error occurs that overwhelms the context and the understanding of the AI. Slop can’t fix slop, as they say. And developers need to go back in and spend a schedule-breaking amount of time and effort to understand the codebase and implement fixes manually.
Making AI work with Agile
You can make AI work with Agile. You can tweak the methodology, you can adopt new tools, and you can apply some old-fashioned discipline.
The first step in tweaking the methodology is to rethink your sync points with stakeholders. What should the unit of work look like? When does it make sense to meet and review progress?
And your sync points will depend on how you define when a unit of work is done. What does done look like when AI is generating your code?
Done should be when all tests pass and all metrics are met. All the tests and all the metrics. Because AI is trained to pass tests to the exclusion of all else (that is reinforcement learning in a nutshell: learning to pass tests), it can’t be trusted in how it passes them. It will take shortcuts, and it will try to shift the bar it is supposed to be clearing. This means that for every metric you need a secondary metric that detects cheating.
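As a sketch of what this pairing looks like in practice, here is a hypothetical "definition of done" gate. The metric names and thresholds are illustrative assumptions, not a prescription: the point is that each primary metric travels with a counter-metric that catches the obvious way of gaming it.

```python
# Hypothetical "definition of done" gate: each primary metric is checked
# alongside a counter-metric that detects cheating on the primary one.
# All names and thresholds here are illustrative assumptions.

def done(metrics: dict) -> bool:
    checks = [
        metrics["tests_passed"],                # all tests green
        metrics["line_coverage"] >= 0.90,       # primary: coverage
        metrics["mutation_score"] >= 0.75,      # counter: tests must actually assert things
        metrics["p95_latency_ms"] <= 200,       # primary: performance benchmark
        metrics["bench_rows"] >= 1_000_000,     # counter: benchmark ran on realistic data
    ]
    return all(checks)

report = {
    "tests_passed": True,
    "line_coverage": 0.93,
    "mutation_score": 0.60,   # high coverage, but the tests kill few mutants
    "p95_latency_ms": 150,
    "bench_rows": 2_000_000,
}

# Coverage looks great, yet the unit of work is not done:
assert not done(report)
```

A real pipeline would compute these numbers from tooling output; the gate itself stays this simple, which is what makes it hard for an agent to argue its way around.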
Your code test coverage needs to be paired with mutation testing. Performance benchmarks need to be paired with realistic data fixtures so you’re not surprised when customers start hammering your product. Find a counter-test for every test.
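To make the coverage-versus-mutation pairing concrete, here is a minimal hand-rolled illustration of the idea. Real tools such as mutmut or cosmic-ray inject mutants automatically across a whole codebase; this toy example just shows why a suite with full line coverage can still let a mutant survive.

```python
# Toy illustration of mutation testing (all functions here are invented
# for the example). A "mutant" is a small deliberate bug; if the test
# suite still passes against the mutant, the suite has a gap.

def is_adult(age: int) -> bool:
    return age >= 18

def is_adult_mutant(age: int) -> bool:
    # Mutation: '>=' flipped to '>'. This is the kind of change a
    # mutation-testing tool would inject automatically.
    return age > 18

def weak_suite(fn) -> bool:
    # 100% line coverage of the function, but no boundary case.
    return fn(30) is True and fn(5) is False

def strong_suite(fn) -> bool:
    # Adds the boundary case, which is what kills the mutant.
    return fn(30) is True and fn(5) is False and fn(18) is True

# The weak suite passes for both original and mutant: the mutant survives,
# so the coverage number was lying about test quality.
assert weak_suite(is_adult) and weak_suite(is_adult_mutant)

# The strong suite passes the original and fails the mutant: mutant killed.
assert strong_suite(is_adult) and not strong_suite(is_adult_mutant)
```

A mutation score (fraction of injected mutants your suite kills) is the counter-metric that keeps a coverage number honest.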
And once the tests pass and the metrics are met, you still need humans to do the review. This will be the cap on the unit of work and the limit on your throughput. It is an old-fashioned "a stitch in time saves nine" solution, and one that should be bypassed only grudgingly, and only after deep testing and constant review (see, it’s inescapable) of the tools and processes you replace it with. And let’s admit it: the tools and processes you replace humans reviewing AI code with will always be AI reviewers of AI code.
Finally, consider what you’re going to cover in your daily scrums. Instead of individual status updates you want to be covering what reviewing needs to be done, what testing strategies are being put in place, what holes are showing up in your process that need patching.
This is a shift from building the product to managing the building of the product. And that shift is for everyone. Your developers will still write code, just much less. They will be managers of agents and monitors and arbiters of their agents’ outputs. AI flattens the workflow to two steps: Design → QA.
There is no conclusion to this
This is just a single step forward in a time of rapid, constant change. While it does appear that AI capabilities are plateauing, the software development industry is still evolving rapidly in its use of AI.
What is clear is that AI is not a super genius. Its decisions can’t be relied on, its code cannot be trusted, and we must verify, verify, verify. But sometimes we can use tools to handle that verification, and sometimes we can even use more AI.
And this is impacting where software developers spend their time, and where the bottlenecks are in building software products. It is an ongoing challenge to find the new best practices for the Agile development of software in the age of AI. We hope this gives you some ideas on where you can look to improve and optimise your practices.