
AI TO HELP DEVS 10X?
Why AI Coding Agents Are Here To Stay
In this episode
Patrick Debois, the mind behind “DevOps,” joins Simon Maple to unpack the system-level shifts AI is driving across software engineering, drawn from what he saw firsthand at the AI Engineer World’s Fair.
They also get into:
• how inconsistent codebases confuse AI
• why running agents locally is becoming obsolete
• inside OpenAI’s concept of “model specs”
A Ground-Zero Event for AI Engineers
Patrick Debois reflects on the AI Engineer World’s Fair in San Francisco. Unlike conferences that bolt AI onto existing agendas, this event focuses exclusively on AI-native development. Drawing 3,000 attendees and leaders from across the AI tool ecosystem, it has become a proving ground for emerging ideas in agentic coding, developer workflows, and infrastructure automation.
Agents Take Center Stage
One of the biggest shifts Patrick noted was the sheer dominance of coding agents. Where coding was once a side topic at AI conferences, it is now central. Most tooling vendors at the event are shifting toward agentic experiences—autonomous, task-driven systems that go beyond autocomplete. While industry adoption is still maturing, the tooling space is rapidly aligning around the agent paradigm.
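To make the distinction concrete, here is a minimal sketch of the loop behind “agentic” tools, as opposed to autocomplete: the model plans, calls a tool, observes the result, and repeats until the task is done. `call_model` is a hypothetical stand-in for any LLM API, stubbed with a fixed plan so the sketch runs as-is.

```python
# Minimal agent loop: plan -> act -> observe -> repeat.
import subprocess, sys

def call_model(history: list[dict]) -> dict:
    # Hypothetical model call. A real agent would send `history` to an LLM
    # and parse a tool invocation out of the reply; stubbed here.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "run_tests", "args": []}
    return {"tool": "done", "args": []}

def run_tests(args: list[str]) -> str:
    # Tool: run the test suite and return its output as an observation.
    proc = subprocess.run([sys.executable, "-m", "pytest", *args],
                          capture_output=True, text=True)
    return proc.stdout + proc.stderr

TOOLS = {"run_tests": run_tests}

def agent(task: str, max_steps: int = 10) -> None:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)
        if action["tool"] == "done":
            print("task complete")
            return
        history.append({"role": "tool",
                        "content": TOOLS[action["tool"]](action["args"])})

agent("make the test suite pass")
```

The point of the loop is that the model decides what to do next based on real feedback from tools, rather than producing a single completion and stopping.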
A New Paradigm Every Six Months
Patrick warns that using AI tools the same way you did six months ago is a mistake. Agent-based workflows have changed the game, shifting from chat-based prompting to spec-driven automation and headless execution. Tools like Claude Code are pushing asynchronous, CLI-based workflows, moving beyond the IDE as the core interaction surface.
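Here is what a headless, CLI-based workflow can look like in practice: a script hands the agent a task and consumes its output with no IDE in the loop. The `claude -p` (non-interactive print mode) invocation reflects Claude Code’s headless usage at the time of writing; treat the exact flags as an assumption and verify against your installed version.

```python
# Drive a coding agent headlessly from a script, CI job, or cron task.
import subprocess

def run_headless(task: str, repo_dir: str) -> str:
    proc = subprocess.run(
        ["claude", "-p", task],   # assumed flag: -p = non-interactive mode
        cwd=repo_dir,             # run inside the target repository
        capture_output=True, text=True, timeout=600,
    )
    return proc.stdout

# Example: kick off a refactor asynchronously, no editor session required.
print(run_headless("update deprecated API calls in src/", "./my-repo"))
```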
Specs Are the New Code
One key insight: specifications are becoming central artifacts. Inspired by OpenAI’s internal use of model specs, developers are beginning to treat specs not just as input but as the source of truth. With good specs, you can align teams, generate tests, and even regenerate implementations. Specification languages like Gherkin and test frameworks like Cucumber serve as bridges between intent and validation, strengthening AI alignment.
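A small sketch of that bridge, using Python’s `behave` framework: the Gherkin scenario (shown in the comment) captures intent, and the step definitions bind it to executable validation. The discount example and function names are illustrative, not from the episode; `apply_discount` stands in for the implementation under test, which an agent could be asked to regenerate until the spec passes.

```python
# Gherkin spec (would live in a .feature file):
#
#   Feature: Discount codes
#     Scenario: Valid code reduces the total
#       Given a cart totalling 100.00
#       When the code "SAVE10" is applied
#       Then the total is 90.00
from behave import given, when, then

def apply_discount(total: float, code: str) -> float:
    # Stand-in for the code under test.
    return total * 0.9 if code == "SAVE10" else total

@given("a cart totalling {amount:f}")
def step_cart(context, amount):
    context.total = amount

@when('the code "{code}" is applied')
def step_apply(context, code):
    context.total = apply_discount(context.total, code)

@then("the total is {expected:f}")
def step_check(context, expected):
    assert abs(context.total - expected) < 0.01
```

Because the spec is readable by both humans and models, the same artifact aligns the team, generates the tests, and constrains whatever the agent produces.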
Agents Move to the Cloud
As agents take on more complex and long-running tasks, developers are pushing execution to the cloud. Local machines struggle with the CPU load and can’t guarantee the runtime reliability these jobs need. Tools like Cursor now offer cloud agents that interact with Git repos directly, and developers are shifting to containerized, sandboxed execution environments to manage trust, permissions, and scale.
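A minimal sketch of that sandboxing idea: each agent task gets a throwaway container with no network access and capped resources, so trust and permissions are enforced by the runtime rather than by the agent. The docker flags are standard; the image name and agent command are assumptions.

```python
# Run one agent task inside a disposable, resource-capped sandbox.
import subprocess

def run_sandboxed(repo_dir: str, command: list[str]) -> int:
    proc = subprocess.run([
        "docker", "run", "--rm",
        "--network=none",            # no outbound access from the sandbox
        "--cpus=2", "--memory=4g",   # cap resources per agent task
        "-v", f"{repo_dir}:/work",   # mount only the repo being worked on
        "-w", "/work",
        "agent-sandbox:latest",      # hypothetical image with the agent CLI
        *command,
    ])
    return proc.returncode

run_sandboxed("/srv/checkouts/task-42", ["agent", "fix", "--issue", "42"])
```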
Parallel Execution & The Burden of Review
AI coding is also becoming parallelized—splitting work across agents for speed or variation. But while generating multiple solutions is easy, evaluating and merging them is still a human burden. The next challenge is developing UI and orchestration tools that can assist with review, comparison, and synthesis of parallel outcomes.
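The fan-out half of that pattern is easy to sketch; the review half is the open problem the episode points to. `run_agent_attempt` below is a hypothetical wrapper around whatever agent CLI or API you use, stubbed so the orchestration pattern itself is runnable.

```python
# Fan the same task out across N agent attempts, then collect candidates.
from concurrent.futures import ThreadPoolExecutor

def run_agent_attempt(task: str, attempt: int) -> str:
    # Would invoke an agent in an isolated worktree/sandbox and return a diff.
    return f"--- candidate patch #{attempt} for: {task}"

def fan_out(task: str, n: int = 4) -> list[str]:
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(run_agent_attempt, task, i) for i in range(n)]
        return [f.result() for f in futures]

# Generating candidates is cheap; comparing and merging them is the human cost.
for patch in fan_out("speed up the CSV importer"):
    print(patch)
```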
CI/CD Is Shifting Left—Again
Agentic workflows are triggering a new wave of shift-left CI/CD. Tests, code validation, and environment replication are happening earlier, closer to the developer’s editor. With containerized environments and real-time execution, feedback loops are shortening and aligning more closely with developer workflows.
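One concrete shape this takes is replaying the pipeline’s checks locally, for example as a git pre-push hook, so failures surface at the editor instead of minutes later on a build server. The specific check commands below are assumptions about a typical Python project, not a fixed standard.

```python
#!/usr/bin/env python3
# Replay CI checks locally before a push (install by symlinking this file
# to .git/hooks/pre-push and marking it executable).
import subprocess, sys

CHECKS = [
    ["ruff", "check", "."],           # lint
    ["python", "-m", "pytest", "-q"], # unit tests
]

def main() -> int:
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("check failed; push blocked")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```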
How Much Productivity? It Depends
Claims of 5x or 10x productivity from AI tools vary widely. Patrick explains that effectiveness depends on task complexity, codebase consistency, and language popularity, since models have seen far more training data for mainstream stacks. Simpler projects or newer codebases may benefit more, while older, inconsistent codebases confuse both humans and LLMs. Productivity gains are real but nuanced.
Constant Change, Constant Learning
The episode closes with a reminder: AI dev workflows are evolving at a pace unlike anything before. What works today may be obsolete tomorrow. Developers must embrace experimentation, stay flexible, and continually re-evaluate their workflows to stay ahead.