Hey there! Welcome to Platform Weekly, your weekly dose of platform engineering goodness. Every week, we zoom into another corner of the field, from community updates to lessons and best practices. This week we've got another banger guest newsletter from Lou Bichard:
- How to build a CI from the ground up (as good as the hyperscalers)
- How to make self-service infrastructure 10X easier for developers
- Another next level AI agent for the enterprise!?
AI agents need infra (and that's now your problem)

The industry just leaped from "AI suggests a line of code" to "AI writes entire pull requests while you sleep" practically overnight. But I'd forgive you for missing the announcements, because keeping up with AI right now is like drinking from a firehose. Let me bring you up to speed on what's coming for your infrastructure.

In May, Google dropped Jules, an AI agent that can autonomously handle complex coding tasks in the background. OpenAI launched Codex, which spins up its own development environments to work across entire codebases, writing and testing code without supervision. And Gitpod released Ona, designed specifically to run these autonomous agents securely within enterprise infrastructure.
I can guarantee that in the next few months – or possibly even already – you're going to get tasked with evaluating these background coding agent tools. If you don't own that evaluation, there's a good chance your developers are already making the decision without you.
Your new AI teammates will need somewhere to live
So what exactly are these new agent tools, and how do they work?
Background agents are like having an AI teammate you can assign tasks to and walk away from. Instead of sitting in your editor making suggestions, you give them a well-defined goal – "add authentication to this API," "migrate this component to TypeScript," or "write unit tests for this module" – and they go off and handle it independently.
You might trigger them by assigning a GitHub issue to an AI agent, messaging them in Slack, or using a dedicated interface where you describe what you want done. The agent then spins up its own isolated development environment in the cloud, clones your repository, writes the necessary code, runs tests, and eventually opens a pull request for your review.
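To make that lifecycle a bit more concrete, here's a rough Python sketch of the loop an agent runner might follow. It's not any vendor's actual API – `apply_model_changes` and `open_pull_request` are hypothetical stand-ins for the model call and the pull request step.

```python
import subprocess
import tempfile
from dataclasses import dataclass


@dataclass
class AgentTask:
    repo_url: str      # repository the agent works on
    goal: str          # a well-defined goal, e.g. "write unit tests for this module"
    base_branch: str = "main"


def apply_model_changes(workspace: str, goal: str) -> None:
    """Placeholder for the LLM step: edit files in `workspace` toward `goal`."""
    # A real agent would call its model here and write the proposed edits to disk.


def open_pull_request(workspace: str, task: AgentTask) -> str:
    """Placeholder: push a branch from `workspace` and open a PR, returning its URL."""
    return "https://example.com/hypothetical-pr-url"


def run_background_task(task: AgentTask) -> str:
    """One background-agent run: isolated workspace -> edits -> tests -> pull request."""
    # 1. Spin up an isolated workspace (a cloud sandbox in practice, a temp dir here).
    with tempfile.TemporaryDirectory() as workspace:
        # 2. Clone the repository into the workspace.
        subprocess.run(["git", "clone", task.repo_url, workspace], check=True)

        # 3. Iterate: propose changes, run the test suite, stop when it passes.
        for _ in range(5):
            apply_model_changes(workspace, task.goal)
            tests = subprocess.run(["make", "test"], cwd=workspace)
            if tests.returncode == 0:
                break

        # 4. Open a pull request for a human to review.
        return open_pull_request(workspace, task)
```

The detail worth dwelling on is step 1: every task gets its own throwaway workspace, and that workspace is exactly the infrastructure question the rest of this piece is about.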

They're surprisingly capable at handling routine but time-consuming tasks: refactoring legacy code, updating documentation, fixing bugs based on error logs, or implementing well-specified features. But they're not entirely magic (at least, not yet): they can struggle with ambiguous requirements, complex architectural decisions, or tasks that require deep business context. And because they work autonomously, you lose the step-by-step visibility you get with traditional coding assistants, which means you now need robust infrastructure and processes in place.
The most important difference from coding assistants, though, is that background agents can't just run on a developer's laptop. To run in the background and in parallel, they need environments where they can clone repos, install dependencies, run tests, and iterate on code.
In short, these agents need somewhere to actually run.
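What does "somewhere to run" actually look like? Here's one hypothetical way a platform team might describe such an environment as data – the field names are illustrative, not taken from any particular product.

```python
from dataclasses import dataclass


@dataclass
class AgentEnvironmentSpec:
    """Hypothetical per-task environment definition for a background agent."""
    base_image: str              # container image with the required toolchain
    setup_commands: list[str]    # dependency installation, code generation, etc.
    test_command: str            # how the agent verifies its own changes
    cpu: str = "2"               # resource limits keep dozens of parallel agents affordable
    memory: str = "4Gi"


# Example: what a TypeScript migration task might need.
ts_migration_env = AgentEnvironmentSpec(
    base_image="node:20",
    setup_commands=["npm ci"],
    test_command="npm test",
)
```

However it's expressed, someone has to define, provision, and pay for these environments – and that someone is the platform team.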
Another major challenge for platform teams is that most AI coding tools are built by YC-backed startups targeting other Silicon Valley companies. If you're working at an enterprise (and let's be honest, most platform teams are), these tools aren't designed for your reality. There's no way sending source code to an unvetted third-party LLM is going to pass your security team's review.
Your CISO-approved checklist for background agents
When evaluating AI agents, you need to think beyond "does it write good code?" The key questions are: Does your source code leave your network? Can you audit what the agent accessed and modified? Does it integrate with your existing SSO and repository permissions?
Each agent task should run in its own sandboxed environment with network policies you control. For larger organizations, you need multi-tenancy so different teams can't see each other's work. And critically – where does the actual processing happen? Most current tools ship your code to third-party clouds, which is often a non-starter for enterprises.
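As a thought experiment, that checklist might translate into per-task policies along these lines. Every field name here is made up for illustration; it isn't any particular product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SandboxPolicy:
    """Illustrative per-task security policy for an agent sandbox."""
    tenant_id: str                        # which team owns the task (multi-tenancy)
    allowed_egress: list[str]             # the only network destinations the agent may reach
    code_stays_in_network: bool = True    # processing stays inside your boundary, not a third-party cloud
    sso_group: str = "engineering"        # who is allowed to see this task's results


@dataclass
class AuditEvent:
    """One entry in the audit trail: what the agent touched, and when."""
    task_id: str
    action: str     # e.g. "read_file", "ran_command", "opened_pr"
    target: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


# Example: a payments-team task that may only reach the internal package mirror.
policy = SandboxPolicy(
    tenant_id="payments",
    allowed_egress=["packages.internal.example.com"],
)
trail = [AuditEvent(task_id="task-123", action="read_file", target="src/auth.ts")]
```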
Nothing gets through to the enterprise unless it gets past security first. That means you will always need more than just an off-the-shelf AI tool. Take Ona, for example: it's built as a privacy-first software engineering agent specifically for enterprise infrastructure. Any other approach to AI tooling isn't going to deliver what you need (or potentially even get approved).
The uncomfortable truth is that this AI reality is now upon us. As platform engineers, we can either drive this change and be the innovators who shape how our organizations harness AI, or we risk being seen as the laggards who slow down progress and prevent our teams from moving forward. The moment is here for you to choose: what kind of developer experience do you want to build? Will you lead the transformation, or let decisions be made for you?
If you're curious about Ona, the other creators and I will be doing demos at PlatformCon in London & New York (and giving away 3D printers and Steam Decks). See you in London on 25th June (Booth #12) or in NYC on 26th June (Booth #9).
