Amp, the autonomous coding agent that recently spun out of Sourcegraph as a standalone company, has introduced a new agentic code review feature designed to help developers examine and understand code changes with greater structure and depth.
Released as part of the Amp extension for VS Code, the review agent pre-scans diffs and provides summaries, guidance, and actionable feedback to support the review process.
The launch follows Amp’s first stab at in-editor code review tooling back in October, which introduced a review panel powered by a single, “one-shot” large language model request to summarise and navigate changes. According to Amp, the new review agent represents a step-change from that earlier approach, using a dedicated, review-focused agent and toolset to perform deeper analysis, surface more potential issues, and filter out low-signal feedback.
Amp, for the uninitiated, is an AI coding agent built to help developers generate and modify code. It operates through both terminal interfaces and editor extensions, and is designed to tackle multi-step workflows that stretch beyond isolated prompts.
The company has been on a fairly steady release run of late, rolling out features such as Thread Map, which visualises how agent conversations relate over time; an ad-supported free tier that lowers the barrier to entry; and public developer profiles that allow users to share coding sessions.
The new code review agent builds on that recent feature sprint, extending into one of the most common development tasks: examining and accepting changes. Code review is a cornerstone of collaborative software development, but it can be time-consuming and cognitively demanding — especially when reviews span many files and complex diffs.
With the Amp VS Code extension enabled, developers can launch an agentic review session directly from the editor, opening a dedicated review panel scoped to a specific task, commit range, or diff.

After the initial scan, Amp’s review panel surfaces a change summary alongside a recommended review order, with per-file churn indicators to help prioritise where to look first. Developers can then request a full agentic review from the same panel and keep moving through diffs while Amp generates the accompanying summaries and commentary.

Amp’s push into agentic code review mirrors a broader shift underway across AI-assisted developer tools.
As coding agents take on more responsibility for generating changes, teams are increasingly focused on the mechanisms for interpreting, assessing, and approving that output. Cursor, another AI-powered development environment, recently announced plans to acquire AI code review platform Graphite. The deal was positioned around smoothing the transition between writing and reviewing code — a handoff that becomes more complex as AI systems contribute larger, more frequent diffs.
Together, these moves suggest that the next phase of AI coding tools is less about raw generation speed and more about how developers stay in control: understanding intent, reviewing changes efficiently, and deciding what ultimately makes it into production.
For now, Amp’s agentic code review is available only through its VS Code extension, a limitation that has prompted many in the community to ask whether it will ever make it to the terminal – after all, the command line remains a primary working environment for many developers.
Terminal support, as it happens, is very much on the roadmap, though when we might see it isn’t yet clear – Amp’s builder in residence Ryan Carson confirmed on X that the company is still exploring how the code review experience should translate to the command line.
That uncertainty reflects a wider set of open questions Amp is actively working through as it extends agentic workflows beyond code generation and into governance and review. Among them: how reviews should map to agent threads, given that a single review may span the output of multiple concurrent threads; how accepted or rejected review feedback should feed into long-term system memory; and whether those signals should influence artifacts like AGENTS.md. The terminal experience remains one of the more unresolved areas, with Amp weighing whether an agentic review belongs in a standalone, editable TUI or should integrate directly with existing terminal-native editors and diff viewers that developers already use.
Collectively, these questions point to the harder phase of agent design — deciding how agents fit into established workflows for understanding, evaluating, and maintaining code over time.