Generative AI | Technology
Jun 27, 2025

Agentic Coding For Teams – Tools and Techniques

AUTHOR

James Wondrasek

AI coding assistants have advanced from providing smart autocomplete to building complete, albeit simple, products. This advance has been fuelled by a combination of improvements in model quality, coding-focused training, and new tooling that supports rapid code development.

We’re going to cover the different levels of AI coding assistant usage and go into detail on the key strategies developers are using to multiply their productivity with these tools.

We’ll also discuss how context and compute impact results, share practical strategies for teams and point you to sources for more in-depth information.

 

The Tools – plenty to choose from

There are currently two main categories of coding tools. The first generation is IDE-based tools like GitHub Copilot and Cursor, which are in a constant race to maintain feature parity with each other while also doing their best to integrate ideas from the second generation of coding tools: the agent-based paradigm spearheaded by Claude Code.

This paradigm is starting to be referred to as Agentic Development Environments (ADE).

There are also browser-based tools like v0, replit, lovable and bolt.new, but we will be sticking to tools that are likely to be used by teams working on substantial, local codebases.

Below is a non-exhaustive table of AI coding tools that we examined while writing this article. 

 

IDE Based: GitHub Copilot, Cursor, Windsurf, Devin, Amazon Q Developer, Trae, Continue, Tabnine, Augment

ADE Based: Amp, Claude Code, OpenAI Codex, Gemini CLI, Warp, Factory, Jules

Open Source: Cline, Roo Code, Aider, Goose, Continue.dev, OpenHands, Plandex

 

Levels of AI Coding Assistant Use

Different tasks and different developers require different approaches to using AI. Sometimes fine-grained control is needed. At other times, for well-defined problems and “boilerplate”, an AI coding assistant can shoulder more of the effort.

We’ve broken down the use of coding assistants into four levels: Autocomplete, Pair Programming, Feature Lead, and Tech Lead.

Autocomplete – Line-level AI assistance

This style of coding assistant usage is good for working in existing codebases and making multiple edits or refactoring. It is a feature of the leading IDE-based tools. 

A good AI autocomplete can fill in boilerplate like type information and assist with repetitive code such as mapping values between objects or marshalling and un-marshalling data formats.

It can also predict where your next change needs to be made, allowing you to jump to edit spots. For example, adding a typed argument to a function definition will prompt it to suggest the required import statement at the top of the file.

For more detailed additions, where some mind-reading would be required, writing a short comment about the next step in the function you’re writing can prime the autocomplete enough for it to provide a first pass you can craft into the form you need.
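As a sketch, in a TypeScript file a short comment like the one below might be all the priming the autocomplete needs to draft the mapping function that follows it. The types and field names here are hypothetical:

```typescript
// Hypothetical types, for illustration only
interface ApiUserResponse { id: string; display_name: string; created_at: string }
interface User { id: string; name: string; createdAt: Date }

// Map the raw API response to our internal User type, converting the date string to a Date
function toUser(response: ApiUserResponse): User {
  return {
    id: response.id,
    name: response.display_name,
    createdAt: new Date(response.created_at),
  };
}
```

Here the comment above the function is what you write; the body is the kind of first pass the autocomplete can produce for you to refine.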


Pair Programming

The next level up uses IDE-based AI coding assistants like Cursor, Windsurf, Cline, and Roo. It operates at the function level: you instruct the coding assistant to write blocks of code through the IDE’s chat panel and tweak the generated code with manual edits in the file windows.

We call this “Pair Programming” because code is written in dialogue with the coding assistant, with the developer moving between prompting in the chat interface and revising code that the AI writes.

Getting the best performance out of the coding assistant requires giving it all the background knowledge it will need about the project, or the particular task you’re working on. It will know that if the file is TypeScript it has to code in TypeScript, but it won’t know which libraries you want it to use, or what other APIs/sub-systems it has access to.

The developing standard for providing this information is to use “Rules” files. Coding assistants each have their own file or directory of files where they look for instructions to load into their context at the beginning of a session or a new conversation.

Rules can provide guidance on coding conventions, project structure, library preferences, commands to perform or any other information or action you need.

You can even use the coding assistant to update or write new rules as the opportunity (or problem) arises.

Each coding assistant has its own convention for rules file names and locations. Check the documentation.
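As an illustration, a rules file might contain something like the following. The file name conventions vary by tool, and every detail below is hypothetical; your own rules should reflect your project’s actual conventions:

```markdown
# Example project rules (illustrative content only)

- This is a TypeScript monorepo managed with pnpm workspaces.
- Use the existing HTTP client wrapper in src/lib/http.ts; do not add new HTTP libraries.
- All database access goes through the repository layer in src/db/; never write raw SQL in route handlers.
- Run `pnpm test` after completing each task and fix any failures before moving on.
- Follow the existing error-handling pattern: return typed results rather than throwing from service functions.
```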

 

Feature Lead

For this level we are defining feature development as anything that involves adding code across multiple files and/or integrating functionality into an existing codebase.

This is where coding assistants start to offer a substantial productivity boost. It’s also where programming takes a step up the ladder of abstraction from code to specifications for the code. 

Here is a quote from Robert C. Martin in his book “Clean Code” from 17 years ago:

“Indeed some have suggested that we are close to the end of code. That soon all code will be generated instead of written. That programmers simply won’t be needed because business people will generate programs from specifications.

Nonsense! We will never be rid of code, because code represents the details of the requirements. At some level those details cannot be ignored or abstracted; they have to be specified. And specifying requirements in such detail that a machine can execute them is programming. Such a specification is code.”

At this level, typing is no longer the limiting factor on how quickly code can be produced. Instead, the limit is set by clarity of instruction: the specifications given to the coding assistant, and the work of producing those specifications.

This has led to the adoption of a technique sometimes known as “Product Requirements Document Driven Development” (PRDDD). With detailed specifications determining success in using AI coding assistants, it turns out you can use AI coding assistants to help you write the detailed specifications you need.

The document creation process for PRDDD follows this path:

PRD → Technical Specification → Implementation Plan → Checklists → Task lists

The PRD is created in a discussion with an AI like Gemini Pro, Claude Opus or o3, instructed to resolve unknowns and ambiguities by asking you clarifying questions.

The PRD is then used in a similar process to create a Technical Specification, and each new document is used to create the next.

It is a common strategy to use a second provider’s model to critique and refine the PRD, technical specification and implementation plan. And of course a senior developer should also review and refine them.

Next, you create as many Checklists as needed. You choose how you break down your project: services, implementation phases, etc. Aim for clarity of purpose. You want a checklist to be dedicated to one clear end.

Checklists are then turned into detailed Task Lists by the coding assistant.

The coding assistant can be prompted to turn an item on a checklist into a detailed task list for a mid-level developer (targeting a junior developer level will create too many steps or be over-simplified).
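The prompt for that step can be as simple as something along these lines. The file names and wording are only illustrative:

```text
Take the item "Add password reset flow" from checklists/auth-service.md and expand it
into a task list for a mid-level developer. Use docs/technical-spec.md as the source of
truth, list the files to create or modify for each task, and add a verification step to
each task.
```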

A detailed walkthrough of the process is available on specflow.com.

Code then verify

Then it is simply a matter of instructing the coding assistant to complete the items in a task list, marking them off as it goes.

Then, with a cleared context or in a new session, instruct the coding assistant to verify the completion of the tasks.
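For example, the verification pass might be prompted along these lines (illustrative only):

```text
Open tasks/auth-service-tasks.md. For each item marked complete, confirm the change exists
in the codebase, run the relevant tests, and report any task that is marked done but not
actually implemented or not passing its tests.
```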

There are workflow tools that automate opinionated versions of PRDDD:

Claude Simone (Claude Code only)

Claude Taskmaster (All IDE-based tools)

 

Tech Lead

This level involves working at the application level and leverages Agent Orchestration instead of assistant management.

Agent Orchestration still uses PRDDD but in parallel across multiple agents. 

Depending on your coding assistant you will use either in-tool orchestration or manual orchestration.

Some tools have inbuilt orchestration and can launch multiple agents (called sub-agents or tasks) themselves.

Manual orchestration is built around terminal-based coding assistants like Claude Code and OpenAI Codex. It combines Git worktrees and tmux to work on multiple features simultaneously, and the process works with any terminal-based coding assistant. A sketch of the setup is shown below.
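A minimal sketch of that setup, assuming Claude Code as the agent and two hypothetical feature branches:

```bash
# One worktree per feature gives each agent an isolated checkout
git worktree add -b feature-login ../myapp-feature-login
git worktree add -b feature-billing ../myapp-feature-billing

# One detached tmux session per worktree, each running its own agent
tmux new-session -d -s feature-login -c ../myapp-feature-login claude
tmux new-session -d -s feature-billing -c ../myapp-feature-billing claude

# Attach to whichever session you want to supervise
tmux attach -t feature-login
```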

Its popularity has led to specialised tools for managing manual orchestration.

 

The Two Practices That Maximise AI Coding

No matter which level of AI coding usage you are working at, there are two key practices you need to get right to get the best results from AI coding assistants: managing context and burning compute.

Managing Context

AIs are getting longer context windows, but their performance suffers as their context window fills. Managing the context window is currently a key focus of developers using agentic coding tools. Growing awareness of the impact of context window contents on agent performance is causing “prompt engineering” to give way to “context engineering”.

Concise, targeted documentation is needed to leave space for the AI to read code, write its own code into the context, reason about it, make tool calls and perform management tasks. Going overboard on “rules” files can negatively impact the quality of the code an assistant can produce, and how “agentic” it can be.

Until the tools are smart enough to optimise context for you, follow these tips to maximise information while minimising tokens:

Use sub-agents/tasks.

Sub-agents act like a fresh context window: they complete a delegated task and report back only the result, keeping the main agent’s context small.
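For example, in a tool that supports sub-agents, a delegation prompt might look like this (illustrative only):

```text
Use a sub-agent to find every module that imports the payments service and report back
only the file paths and calling function names. Do not paste file contents into this
conversation.
```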

Burning Compute

The more inference-time compute an AI uses, the better the chance the result is correct. Both prompt tokens and generated tokens contribute to that compute.

Chain of Thought (CoT), instructing a model to document a thinking process as part of its response, is an example of burning more compute to improve results.

Reasoning models are LLMs that have been trained to generate an intrinsic form of CoT. In Claude Code you can set the thinking budget Claude Opus or Claude Sonnet will expend on a response by including “think”, “think hard”, “think harder”, or “ultrathink” in your prompt text, controlling how much extra compute you want to use.
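For example (an illustrative prompt):

```text
ultrathink about why the nightly sync job intermittently writes duplicate records. List
the likely causes ranked by probability, then propose a fix as a short task list.
```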

Best-of-n is another technique, where the same prompt is run “n” times and the best result is used. OpenAI’s o1-pro model costs more than o1 because it uses the best-of-n approach to generate answers, making it “n” times the cost of the default o1 model. The same technique is used to produce high-quality answers from o3-pro. This increased use of compute also means a longer time to return an answer.

Using best-of-n, smaller models can reach the performance of larger models if given enough compute via multiple runs, but there are limits to this size/compute trade-off.
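As a sketch, best-of-n selection looks something like this. The generate and score hooks are hypothetical stand-ins, for example a model API call and a test-suite pass rate:

```typescript
// Minimal best-of-n sketch: run the same prompt n times and keep the highest-scoring result.
async function bestOfN(
  prompt: string,
  n: number,
  generate: (prompt: string) => Promise<string>,
  score: (candidate: string) => Promise<number>,
): Promise<string> {
  // Generate all candidates in parallel from the same prompt
  const candidates = await Promise.all(
    Array.from({ length: n }, () => generate(prompt)),
  );
  // Score each candidate, e.g. by running tests against the generated change
  const scored = await Promise.all(
    candidates.map(async (candidate) => ({ candidate, value: await score(candidate) })),
  );
  // Keep the best attempt
  return scored.reduce((best, current) => (current.value > best.value ? current : best)).candidate;
}
```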

All this means that retrying a failed task multiple times is a reasonable strategy. But make sure you make follow-up attempts with a fresh, strategically primed context that includes what has been tried and didn’t work. You can get the coding assistant to provide that try/fail summary before starting a new conversation.

After 3 failures you should try a model from another provider to solve the issue or to get insight on the failure.

Burning Compute & PRDDD

PRDDD uses iterative decomposition of your goals to cache compute.

Using AI to break down a task into small steps, each supported by a detailed prompt of system and process documentation, leverages the earlier compute that created the documentation.

Inference over a detailed prompt for even a simple task gives you the best chance of success by maximising compute. But you need to be sure that there is enough headroom in the agent’s context for the detailed prompt along with the agent’s thinking, tool responses and file changes in order to get the best results.  

Everyone wants to use less compute to save money, but using more compute can get you single-shot success instead of burning more compute (and time) iterating over poorly (cheaply) specified tasks.

Starting a fresh session and instructing the coding assistant to verify the tasks it has completed spends more compute, but the shorter context provides better coherence and better outcomes.

First you do it, then you do it right

This is a technique that builds on the idea of burning compute as well as the old engineering adage: “First you do it, then you do it right, then you do it fast”.

Start your code change in a new branch. First, use the agent to make a plan for executing the change. Have the agent maintain an append-only log where it records the files it used, decisions made, the questions that came up, the answers to those questions, and any surprises encountered while executing the plan. Once the coding task is completed, commit it and close the branch. Then have the agent review the diff and update the plan with any insights. Finally, roll back to before the branch and re-run the code change, with the updated plan and the log guiding the agent through a second pass. A sketch of the git side of this loop is shown below.
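A hedged sketch of the git side of that loop, assuming a terminal-based agent and hypothetical branch and log file names:

```bash
# First pass: the agent executes the plan on a throwaway branch,
# appending to an implementation log as it goes
git checkout -b feature-first-pass
# ... agent works, writing notes to docs/implementation-log.md ...
git add -A && git commit -m "First pass at feature"

# Have the agent review the diff and fold what it learned back into the plan
git diff main...feature-first-pass

# Roll back and re-run the change with the updated plan and log as context
git checkout main
git branch -D feature-first-pass
git checkout -b feature-second-pass
```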

Practices for Teams

Worth Reading

The sources below cover the status quo in best practices for using AI coding assistants as of June 2025. They are worth reading. The AI High Signal list on Twitter is a good place to watch for the emergence of new techniques and tools, and the AI News newsletter delivers daily summaries of trending topics.
