The other day I happened upon an X thread that got me thinking: "Do you believe in parallel coding agents? Do you think it'll become an important part of future software engineering?"

First, let me define the term. A coding agent is effectively a model running in a loop: editing files, running terminal commands and iterating. As agents get better, developers are pushing the boundaries by having multiple agents work on tasks in parallel, using git worktrees, kanban visualizations, sub-agents and other techniques to really squeeze the juice out of these new agent tools.

But whilst we push the boundaries of agents, we must solve the ‘trust’ problem: we cannot hand over end-to-end critical work to an agent without a human in the loop. If we get parallelism right, coding agents are a huge productivity opportunity. But first we have to help our developers see, and ‘believe’ in, agents and the possibility of parallel agent work.

But what does this all mean for platform teams? Aren’t agents a developer concern?

Not exactly. With growing agent autonomy we’re now seeing agents shift to the cloud, under names like ‘parallel agents’, ‘background agents’ or even ‘async agents’: agents that write code in remote environments. This brings them squarely into the domain of most platform engineering teams, who must now become the architects of what agents access, where they run, and how we build ‘human-in-the-loop’ reviews into our workflows.

So where can our teams start investing today? Aside from keeping an eye on the latest AI lab improvements and continuing to run our own evaluations, there are three key areas:

Building our platform API first - Agents are powerful in their ability to ‘function call’ and make use of ‘tools’. Despite much fanfare around MCP, we now see that agents function well with CLIs that are both discoverable and occupy less of the context window. For our platforms, we’d be wise to adopt the infamous Bezos API mandate: “All teams will henceforth expose their data and functionality through service interfaces, and anyone who doesn't do this will be fired.”
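What makes a CLI agent-friendly is that its capabilities are self-describing: an agent can run `--help` and discover every verb without any of it sitting in the context window up front. A minimal sketch of that idea, using Python’s argparse (the `platformctl` name, subcommands and flags are all hypothetical placeholders, not a real tool):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """A hypothetical platform CLI ("platformctl") whose subcommands and
    --help output make every capability discoverable to an agent."""
    parser = argparse.ArgumentParser(
        prog="platformctl",
        description="Self-service platform operations.",
    )
    sub = parser.add_subparsers(dest="command", required=True)

    # Each subcommand documents itself, so an agent can enumerate
    # capabilities on demand instead of holding a spec in context.
    deploy = sub.add_parser("deploy", help="Deploy a service to an environment.")
    deploy.add_argument("service", help="Name of the service to deploy.")
    deploy.add_argument("--env", choices=["dev", "staging", "prod"], default="dev",
                        help="Target environment (default: dev).")

    logs = sub.add_parser("logs", help="Tail logs for a service.")
    logs.add_argument("service", help="Name of the service.")
    logs.add_argument("--since", default="10m", help="Look-back window (e.g. 10m, 1h).")

    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```

An agent invoking `platformctl deploy checkout --env staging` gets structured, validated input handling for free, and `platformctl --help` doubles as the tool’s own documentation.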

Sandboxing our environments - To maximize agent autonomy we need to be able to run agents with "dangerously skip permissions" or “yolo” mode enabled, but without fear of repercussions. That can’t work without a sandbox or isolation. At the very least, developers should be using Dev Containers to isolate tasks and protect their machines when delegating work to agents; this will also make the shift to cloud-based agents easier in future.
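As a starting point, a `devcontainer.json` can pin the agent’s world to a throwaway container. This is a minimal sketch, not a hardened setup: the `agent-sandbox` name and the `AGENT_MODE` variable are illustrative placeholders, while the base image and `--cap-drop` flag are standard Dev Container and Docker options.

```json
{
  // A minimal sketch; names here are placeholders, not a hardened config.
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // Illustrative env var so tooling inside can detect it is sandboxed.
  "containerEnv": {
    "AGENT_MODE": "sandboxed"
  },
  // Drop extra Linux capabilities so a misbehaving agent can't touch the host.
  "runArgs": ["--cap-drop=ALL"]
}
```

The same file that protects a developer’s laptop today is what a cloud agent runner consumes tomorrow, which is why this investment transfers directly to background agents.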

Enforcing fine-grained access - Finally, advanced agent autonomy will demand fine-grained permissions. We need to ensure our agents can work effectively with access to systems without introducing catastrophic failures, which means expediting our secrets management projects.
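The core pattern here is replacing long-lived shared secrets with narrow, expiring credentials, so a leaked token from one agent task can’t do catastrophic damage. A toy sketch of that idea in Python (the `TokenBroker` class and its scope strings are hypothetical, standing in for a real secrets manager or STS service):

```python
import secrets
import time
from dataclasses import dataclass


@dataclass
class ScopedToken:
    """A short-lived credential limited to a single scope."""
    value: str
    scope: str          # e.g. "repo:checkout:read" (illustrative format)
    expires_at: float   # Unix timestamp after which the token is dead


class TokenBroker:
    """A hypothetical broker that hands agents narrow, expiring tokens
    instead of long-lived shared secrets."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self._issued: dict[str, ScopedToken] = {}

    def issue(self, scope: str) -> ScopedToken:
        """Mint a token valid only for one scope and a short TTL."""
        token = ScopedToken(
            value=secrets.token_urlsafe(16),
            scope=scope,
            expires_at=time.time() + self.ttl,
        )
        self._issued[token.value] = token
        return token

    def authorize(self, token_value: str, requested_scope: str) -> bool:
        """Allow a call only if the token exists, matches the requested
        scope exactly, and has not expired."""
        token = self._issued.get(token_value)
        if token is None or time.time() > token.expires_at:
            return False
        return token.scope == requested_scope
```

In practice this role is played by your secrets manager or cloud IAM issuing short-lived credentials, but the shape is the same: an agent granted `repo:checkout:read` simply cannot write, and its blast radius expires with the token.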

The shift to agents, particularly agents that work in parallel and in the background and that stretch the limits of human attention, will push platform teams to completely rethink our tooling and infrastructure foundations. I also get that there’s a lot going on in the AI space currently and ‘agent’ is certainly a loaded term. If you want to catch up on the trend, check out the developer’s guide to background agents that I put together, which covers the foundational concepts and should bring you more up to speed. 2025 continues to be a wild one.