If 2024 was about AI assisting developers, 2025 was about AI acting as developers.
It started quietly in February when Anthropic released Claude Code — a terminal-based coding agent that felt genuinely different from the IDE-integrated assistants we'd grown accustomed to. By May, we were already documenting best practices for agentic coding, as developers discovered workflows that let them fire off specs and return to completed implementations.
Then OpenAI responded. Their Codex CLI launched in April, open-sourced and model-agnostic — a direct shot across Anthropic's bow. As one developer put it at the time: "First Canvas, then Codex. Anthropic ideates, OpenAI copies."
But the real story wasn't about any single tool. It was about what agents became. Claude Code broke out of the terminal in October, arriving on the web and mobile. Cursor shipped background agents that work while you sleep. And by December, Claude Code came to Slack — meaning a bug report in a team chat could trigger a coding session and return a pull request, all without leaving the conversation.
The shift was profound. Chat became the coordination layer. Agents became team members. And the boundary between "talking about code" and "writing code" started to dissolve.
We explored this evolution throughout the year with guests like Tessl’s Maksim Shaposhnikov, who walked through the spectrum from IDE-based agents to terminal-driven autonomous systems, and Yaniv Aknin, who built an agent in 100 lines of code to compare Claude, Codex, and Gemini head-to-head.
Early in the year, "vibe coding" was everywhere. Developers were shipping prototypes in minutes, prompting their way through problems, accepting large AI-generated changes with minimal review. It was exhilarating — and, as we learned, dangerous.
The wake-up call came in July when entrepreneur Jason Lemkin used Replit in a vibe-coding exercise only for the AI to ignore a code freeze, fabricate data, and delete an entire production database. The incident sparked a broader reckoning: vibe coding was brilliant for exploration but catastrophic for production.
The answer wasn't to abandon AI-assisted development. It was to bring structure back into the loop.
AWS validated this thesis in a big way with the July launch of Kiro, an agentic IDE built around spec-driven development. Their tagline said it all: "From vibe coding to viable code." Meanwhile, GitHub released Spec Kit in October, bringing spec-driven workflows to Copilot, Claude Code, Gemini CLI, Cursor, and Windsurf.
The pattern became clear: the developers getting the best results weren't just prompting — they were specifying. They wrote lightweight specifications before handing work to agents. They invested in tests. They treated AI like a talented but over-eager junior developer who needed guardrails.
Gene Kim captured this perfectly in his AI Native DevCon keynote, "Vibe Coding For Grownups." Drawing on his collaboration with Steve Yegge (who produces 12,000 lines of tested code per day using 3-4 AI agents), Kim outlined both the promise and the pitfalls: AI-generated code can lead to "eldritch horrors" of tightly coupled, incomprehensible architectures. The antidote? Modularity, rapid feedback loops, and — crucially — specification.
If agents were the protagonists of 2025, the Model Context Protocol was the infrastructure that made them valuable.
MCP launched in November 2024, but 2025 was when it went mainstream. By year's end, the MCP SDK had crossed 8 million downloads. Every major player had an implementation. And developers were rushing to give their AI agents tools to read files, access GitHub, query databases, and interact with external services.
GitHub's official MCP server launched in April — a complete rewrite in Go that enabled agents to interact with repos, PRs, and issues through a standardised interface. As we noted at the time, "AI is no longer limited to code generation via prompts — it's now extending across the entire software development lifecycle."
We spoke with Steve Manuel of Dylibso, who described MCP as "the USB-C for AI" — a standardised, model-agnostic interface for connecting models to tools, data, and services. The promise was compelling: build once, run anywhere.
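To make "build once, run anywhere" concrete, here's a minimal sketch of an MCP server using the official Python SDK's FastMCP helper. The server name and the summarize_diff tool are invented for illustration; only the SDK usage itself comes from the protocol's Python SDK.

```python
# pip install "mcp[cli]" installs the official Model Context Protocol Python SDK.
from mcp.server.fastmcp import FastMCP

# Illustrative server exposing a single tool that any MCP-capable client can call.
mcp = FastMCP("diff-stats")

@mcp.tool()
def summarize_diff(diff: str) -> str:
    """Summarize a unified diff as counts of added and removed lines (toy logic)."""
    lines = diff.splitlines()
    added = sum(1 for line in lines if line.startswith("+") and not line.startswith("+++"))
    removed = sum(1 for line in lines if line.startswith("-") and not line.startswith("---"))
    return f"{added} lines added, {removed} lines removed"

if __name__ == "__main__":
    # Serves over stdio, the transport most local MCP clients speak by default.
    mcp.run()
```

Point any MCP-aware client at that script and the tool appears alongside its built-ins; switching models or clients doesn't require touching the server.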
But with adoption came risk.
Liran Tal's warning landed in December like a cold shower: "The 'S' in MCP stands for Security. It doesn't exist yet." His research uncovered tool-poisoning attacks, prompt-injection vulnerabilities, and classic code-injection bugs in popular MCP servers. A severe RCE vulnerability in the Framelink Figma MCP demonstrated that these weren't theoretical concerns: they were already showing up in widely used servers.
The takeaway wasn't to avoid MCP. It was to treat MCP servers like any external integration: scope tokens, rotate credentials, sandbox third-party code, and maintain an allowlist of approved servers. The protocol itself was sound. The implementations needed work.
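In practice, that hygiene can be as simple as a thin launcher sitting between your agent and its MCP servers. The sketch below is a hypothetical example rather than any particular client's mechanism: the approved server list, the package choices, and the GITHUB_TOKEN_READONLY variable are assumptions made for illustration.

```python
import os
import subprocess

# Hypothetical allowlist: only MCP servers the team has reviewed get launched.
APPROVED_SERVERS = {
    "filesystem": ["npx", "-y", "@modelcontextprotocol/server-filesystem", "./workspace"],
    "github": ["npx", "-y", "@modelcontextprotocol/server-github"],
}

def launch_mcp_server(name: str) -> subprocess.Popen:
    """Start an approved MCP server over stdio, refusing anything off the allowlist."""
    if name not in APPROVED_SERVERS:
        raise ValueError(f"MCP server '{name}' is not on the approved allowlist")

    env = os.environ.copy()
    if name == "github":
        # Expect a narrowly scoped, read-only token injected at runtime,
        # never a broad personal access token baked into a config file.
        token = env.get("GITHUB_TOKEN_READONLY")
        if not token:
            raise RuntimeError("Set GITHUB_TOKEN_READONLY with a scoped token first")
        env["GITHUB_PERSONAL_ACCESS_TOKEN"] = token

    return subprocess.Popen(
        APPROVED_SERVERS[name],
        env=env,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
```

Rotation and sandboxing, for example running third-party servers in a container rather than directly on the host, layer on top of the same idea.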
One of our proudest achievements this year was bringing the AI Native Dev community together — twice.
DevCon Spring (May 13) was our second virtual event, designed around what attendees asked for: more live demos, more practical applications, more connection. Highlights included Mathias Biilmann (Netlify CEO) on why Agent Experience matters, and a live vibe-coding session where Petra Evans, who had never written code before, got the classic game Snake running after some back-and-forth prompting. The message was hard to miss: "We're in an age where we can make software development more accessible and more joyful to many more people."
DevCon Fall (November 18-19, NYC) was something bigger: our first in-person event. Industry City in Brooklyn. Two full days. Workshops, exhibitors, engaging talks, a hackathon, keynotes, and a conference party.
Guy Podjarny opened with a keynote on the journey from vibe coding to AI Native Dev as a craft. Gene Kim followed with his instant-classic talk on vibe coding for grownups. René Brandel shared how his team hacked 7 YC apps in 30 minutes by exploiting vulnerabilities in AI-generated code. Liran Tal delivered a live MCP security exploit demonstration that had the room riveted.
The response was overwhelming: "Absolutely loved the event! Great people, vibes and delicious food." "I'm astonished that all the talks are up already for viewing."
Every session is now available on demand.
As we close out 2025, the shape of AI-native development is becoming clearer. Agents are here to stay — but they work best with structure, specification, and human oversight. MCP has become essential infrastructure — but security remains an open problem. Vibe coding unlocked creativity — but viable code requires discipline and structure.
The developers thriving in this new world aren't the ones blindly trusting AI output. They're the ones who've learned to be architects: defining intent, setting constraints, reviewing work, and iterating with purpose.
2026 will bring new models, new tools, and new challenges. We'll be here to cover all of it.
Thanks for being part of the AI Native Dev community this year. Here's to building the future — together.
Want to stay up to date? Subscribe to our newsletter for weekly insights, or join us on Discord to connect with developers embracing AI-first software development.
Originally posted on ainativedev.io