
Custom agents land in Amazon Q Developer CLI, bringing task-specific AI workflows to the terminal

AWS is giving developers more control over its AI coding assistant, rolling out customizable agents inside the Amazon Q Developer CLI. The feature closely follows Claude Code’s debut of AI subagents, and similarly enables teams to adopt a more “modular” approach to development workflows, defining task-specific assistants with tailored permissions, tools, and context.
For context, AWS launched Amazon Q back in 2023 as an AI-powered assistant “for work.” This included a specialized experience for software developers, dubbed Amazon Q Developer, which provides coding help, testing support, troubleshooting, and AWS guidance inside IDEs and the AWS Console.
Then, in November 2024, AWS open-sourced Amazon Q Developer CLI, bringing the same functionality to terminal environments. Want to generate code, debug issues, or get AWS guidance without leaving your shell? Enter Amazon Q Developer CLI.
With customizable agents now in tow, developers can spin up purpose-built versions of Q Developer scoped to particular tasks, anything from refactoring code and managing dependencies to reviewing an application’s security posture, each with its own set of tools, file access rules, and contextual awareness.
How to create custom CLI agents in Amazon Q
Previously, if you were using Amazon Q Developer to review code for security issues, or troubleshoot a failing build, you would have to manually reconfigure the context or prompt it afresh each time. With customizable agents, developers can predefine those workflows in configuration files, specifying which tools the agent can access, what parts of the file system it’s allowed to touch, and what contextual information (e.g. static files or dynamic hooks) it should reference.
So, how do you go about creating custom agents? It all starts with a simple JSON definition. Developers give the agent a name, description, and instructions, then spell out its capabilities, for example, granting it `fs_read` access to scan project files but restricting `fs_write` so it can’t modify source code. You can also tell it which files to always include, and which to add dynamically via context hooks.
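As a rough sketch of what that JSON definition can look like, here is a minimal read-only agent (the agent name, prompt, and exact field names are illustrative, modeled on AWS’s published examples; check the current Q Developer CLI docs for the authoritative schema):

```json
{
  "name": "security-review",
  "description": "Read-only agent for auditing code security",
  "prompt": "You are a security reviewer. Inspect the codebase for vulnerabilities; never modify files.",
  "tools": ["fs_read"],
  "allowedTools": ["fs_read"],
  "resources": ["file://README.md"]
}
```

Because `fs_write` never appears in `tools`, the agent simply has no way to edit source files, and listing `fs_read` under `allowedTools` lets it read files without asking for confirmation each time.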
AWS’s documentation shows several examples, including a “development workflow” custom agent that bundles together tools like `fs_read`, `fs_write`, `execute_bash`, and Git. It specifies static resources to load automatically, such as `README.md`, `package.json`, and `docs/`, and sets up a hook to run `git status` so the agent always has up-to-date repository context.
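Based on that description, the documentation’s “development workflow” agent might be sketched roughly as follows (field names and hook syntax follow AWS’s examples but may differ from the current schema; Git access here is assumed to go through `execute_bash`):

```json
{
  "name": "development-workflow",
  "description": "General development agent with automatic repository context",
  "tools": ["fs_read", "fs_write", "execute_bash"],
  "resources": [
    "file://README.md",
    "file://package.json",
    "file://docs/**/*.md"
  ],
  "hooks": {
    "agentSpawn": [
      { "command": "git status" }
    ]
  }
}
```

The `agentSpawn` hook runs `git status` once when the agent starts, so its working context always reflects the repository’s current state.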
Once defined, you can call it directly in the terminal with a command such as:
```shell
# Start a chat with Amazon Q, telling it which custom agent to load
q chat --agent development-workflow
```
From there, every time you invoke it, the agent knows its role, scope, and resources, with no manual reconfiguration required.
How MCP supports custom agents
Back in April, AWS added support for Anthropic’s Model Context Protocol (MCP) to the Amazon Q Developer CLI, bringing a broader array of external tools and data into the assistant’s workflow. This integration lets developers connect MCP-compatible servers, such as database schemas, observability platforms, or custom services, directly into Q Developer. The promise here is more accurate code generation, context-aware queries, automatic documentation, and real-time data access.
With custom agents now on the agenda, AWS builds on this MCP foundation by letting developers not only connect external resources, but also shape how Q Developer interacts with them. It pushes Q Developer closer to an industry-wide vision of modular, portable AI assistants that can plug into standardized protocols, rather than siloed ecosystems. It’s also a sign, perhaps, that AWS is betting on openness and interoperability as much as raw capability.
Rick Crelia, a cloud infrastructure engineer at Bill.com, described the arrival of custom agents as a “big step forward to making Q more useful for agentic workflows, without having to code agents yourself like you would with a custom LLM-as-a-service solution via Bedrock.”
At the same time, he highlighted one key limitation: chaining multiple assistants still requires manual effort. “One can easily imagine constructing multi-agent workflows using this new feature,” he wrote on LinkedIn. “Although the stitching together of agentic handoffs is currently still left up to the user.”
Despite that caveat, his own experiments left a strong impression. Using a custom “AWS general” agent that pulled in documentation, knowledge, and diagramming MCP servers, Q not only generated an accurate diagram for an AWS CloudFormation stack, but also produced two additional diagrams, an architectural overview and a network flow, all from a simple prompt. “My jaw dropped in disbelief,” Crelia wrote. “It was nothing short of inspirational.”
Crelia also pointed to a practical improvement in how MCP servers are handled: they are now scoped per agent rather than globally, which has ramifications for performance and efficiency in day-to-day development workflows.
“The more MCP servers you have defined globally means increased startup latency for Q chat sessions, not to mention that every MCP server loaded can cause Q to perhaps spend unnecessary processing time determining if the work it is performing needs to use one of the MCP servers and related tool operations,” he said. “Now, you can have specific MCP scoping for the task at hand.”
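In configuration terms, that per-agent scoping means an MCP server is declared inside an individual agent’s definition rather than in a global config. A hedged sketch (the server name, launch command, and `@server` tool-reference syntax are illustrative):

```json
{
  "name": "aws-general",
  "description": "Agent scoped to AWS documentation MCP tooling",
  "mcpServers": {
    "aws-docs": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"]
    }
  },
  "tools": ["fs_read", "@aws-docs"]
}
```

Only sessions started with `q chat --agent aws-general` pay the startup cost of launching that server; sessions with every other agent leave it untouched.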