14 Nov 2025 · 6 minute read

The AI coding boom has grown substantially on borrowed intelligence, with companies like OpenAI, Anthropic, and Google supplying the large language models (LLMs) that power a growing class of downstream developer tools. Take Cursor and Windsurf, which have both hit lofty billion-dollar valuations by building on top of these external models to create friendlier coding environments: editors that can autocomplete code, explain bugs, or write full functions on command.
In industry parlance, they are wrappers: products built on someone else's core technology. Now they're starting to unwrap, a move designed to lessen their reliance on outside model vendors and bring more of the intelligence in-house.
This month, Cursor unveiled Composer, a new in-house model anchoring the broader release of Cursor 2.0. Cursor says that Composer is a mixture-of-experts language model trained via reinforcement learning on real-world software engineering tasks, supporting long-context codebases and tooling such as terminal commands, semantic search, and file edits. Notably, the company claims that Composer delivers frontier-level coding performance at generation speeds four times faster than comparable models, enabling more interactive, in-flow coding.
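Agent models of the kind Cursor describes typically work in a loop: the model proposes a tool call (run a terminal command, search the codebase, edit a file), the harness executes it, and the result is fed back in until the model produces a final answer. The sketch below illustrates that general pattern only; every name in it (run_agent, TOOLS, model_step) is hypothetical and not part of Cursor's actual API.

```python
# Illustrative agent loop of the kind tool-using coding models rely on.
# All identifiers here are made up for the example, not Cursor's interface.

TOOLS = {
    "terminal": lambda cmd: f"ran: {cmd}",            # stand-in for a shell runner
    "search": lambda query: f"results for: {query}",  # stand-in for semantic search
    "edit": lambda patch: f"applied: {patch}",        # stand-in for a file editor
}

def model_step(history):
    """Placeholder for the LLM call; a real agent would query the model here."""
    # Toy policy: once the transcript contains a tool result, finish up.
    if any(role == "tool" for role, _ in history):
        return ("answer", "done")
    return ("search", "where is main() defined?")

def run_agent(task: str, max_steps: int = 5):
    """Alternate model proposals and tool executions until an answer appears."""
    history = [("user", task)]
    for _ in range(max_steps):
        action, payload = model_step(history)
        if action == "answer":
            return payload
        # Execute the requested tool and append its output to the transcript.
        history.append(("tool", TOOLS[action](payload)))
    return None
```

The point of the structure, rather than of any particular line, is that the model's quality and the harness around it are developed together, which is why speed and tool support figure so prominently in both companies' announcements.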

It’s worth noting that Cursor isn’t entirely new to model development. The company previously launched Tab, a lightweight in-house model designed to predict code edits and improve latency inside the editor. Composer represents a more ambitious step — Cursor’s first frontier-scale model, built for reasoning across full codebases and interacting with developer tools.
That earlier work on Tab helped lay the groundwork for Composer, part of a broader mission to “keep coding delightful,” as the company put it.
“We found that often developers want the smartest model that can support interactive use, keeping them in the flow of coding,” Cursor wrote in a blog post. “In our development process, we experimented with a prototype agent model, codenamed Cheetah, to better understand the impact of faster agent models. Composer is a smarter version of this model that keeps coding delightful by being fast enough for an interactive experience.”

On the same day that Cursor debuted Composer, Windsurf (a startup acquired by Cognition in July) announced SWE-1.5, a “fast agent model” aimed at keeping developers in flow with reduced latency and higher responsiveness. That release built on SWE-1, the company’s first home-grown model introduced in May, extending its push to replace dependence on outside providers with its own tailored stack.
The company described SWE-1.5 as a frontier-size model with hundreds of billions of parameters, capable of generating up to 950 tokens per second — roughly 13 times faster than models such as Sonnet 4.5. Cognition said the upgrade wasn’t just about scale but about rebuilding the agent stack itself: the harness, inference engine, and user experience were all redesigned to make development feel more immediate. As the company explained in its announcement:
“Developers shouldn’t have to choose between an AI that thinks fast and one that thinks well, yet this has been a seemingly inescapable tradeoff in AI coding so far. Our goal as an agent lab is not to train a model in isolation, but to build a complete agent. Often-overlooked components are the agent harness, the inference provider, and the end-to-end user experience.”
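To see why the throughput figures above matter for staying "in flow," the arithmetic is simple: generation time is response length divided by decode speed. The sketch below uses the 950 tokens-per-second rate and the roughly 13x multiplier from Cognition's announcement; the 2,000-token response length is an illustrative assumption, not a quoted figure.

```python
# Back-of-the-envelope latency comparison using the cited throughput numbers.
# 950 tok/s is SWE-1.5's claimed rate; the response size is a made-up example.

RESPONSE_TOKENS = 2_000  # hypothetical length of one agent reply

def seconds_to_generate(tokens: int, tokens_per_second: float) -> float:
    """Time to stream a full response at a given decode throughput."""
    return tokens / tokens_per_second

fast = seconds_to_generate(RESPONSE_TOKENS, 950)       # claimed SWE-1.5 rate
slow = seconds_to_generate(RESPONSE_TOKENS, 950 / 13)  # ~13x slower baseline

print(f"fast model: {fast:.1f}s, slower baseline: {slow:.1f}s")
# → fast model: 2.1s, slower baseline: 27.4s
```

A two-second reply keeps a developer reading and reacting; a half-minute one pushes them out of the editor entirely, which is the tradeoff both companies are chasing.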
Together, these moves show how quickly AI coding tools are shifting from polished interfaces into full-stack platforms. What began as a race to wrap the best third-party models has become a scramble to own the intelligence underneath. For companies like Cursor and Windsurf, controlling the model layer is more than a technical upgrade — it’s becoming a matter of survival in a market defined by a handful of model providers and rising inference costs.
Put simply, without their own models, AI IDEs are vulnerable to pricing changes and performance limits beyond their control. Cursor and Windsurf seem to have received that memo. The question now is whether their home-grown systems can really match — or outpace — the giants they once relied on.