Copilot’s Quiet Comeback: GitHub’s AI Tool Isn’t Done Yet
10 Apr 2025 • Patrick Debois
GitHub Copilot, the original AI code editor, was in a unique position: many enterprises already had GitHub as their version control and build system, making adoption a tick in the box. Now, though, it is under pressure from the new kids on the block: grassroots adoption of those tools is causing friction with corporate guidelines.
AI coding tools such as Cursor and Windsurf are hot thanks to their cutting-edge features. Shiny as they are, they are not that easy to introduce into enterprises: procurement requires approval of new tooling, official contracting, third-party processor agreements and further security evaluation. These are all valid processes, but they slow down the adoption of new tools in an enterprise.
In this article we’d like to highlight a few (upcoming) GitHub Copilot features that will allow it to get back in the game. Many of the features described are experimental, but you can enable them if you’re using the VS Code Insiders version of Copilot.
Time to really manage your Prompts
By now we’re all used to typing our “prompts” in the chat. After a few chats and projects you get the feeling you’re repeating yourself. People have been gathering these “specifications” in markdown files that they add to the chat.
Global Custom instructions
Recurring instructions go into `.github/copilot-instructions.md`: whatever you put in this file is automatically added to every chat.
- `github.copilot.chat.codeGeneration.useInstructionFiles`: controls whether code instructions from `.github/copilot-instructions.md` are added to Copilot requests.
- `github.copilot.chat.codeGeneration.instructions` (Experimental): set of instructions that will be added to Copilot requests that generate code.
For more details see the GitHub documentation.
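The instructions file itself is plain markdown. As an illustration, a `.github/copilot-instructions.md` could look like this (the guidelines below are made up for the example):

```markdown
# Copilot instructions for this repository

- We use TypeScript with strict mode enabled; avoid the `any` type.
- Prefer named exports over default exports.
- Every public function needs a JSDoc comment.
```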
Action instructions for chat
You can refine those global instructions depending on the action Copilot is taking. Customize it for code generation, test generation, commit messages, review selection and pull requests.
- `github.copilot.chat.testGeneration.instructions` (Experimental): set of instructions that will be added to Copilot requests that generate tests.
- `github.copilot.chat.reviewSelection.instructions` (Preview): set of instructions that will be added to Copilot requests for reviewing the current editor selection.
- `github.copilot.chat.commitMessageGeneration.instructions` (Experimental): set of instructions that will be added to Copilot requests that generate commit messages.
- `github.copilot.chat.pullRequestDescriptionGeneration.instructions` (Experimental): set of instructions that will be added to Copilot requests that generate pull request titles and descriptions.
These settings take the form of either a simple text string or a markdown file. Markdown files can reference other files using Markdown links (`[index](../index.ts)`) or via a reference like `#file:../index.ts`. This makes prompts reusable and allows you to set up a hierarchy of prompts. Note that you are not limited to markdown files: you can refer to code examples too.
For more details see Copilot's custom instructions guide, the prompt files documentation, and the custom instructions deep dive.
The instruction syntax looks like this:
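A sketch of the `settings.json` shape: entries are objects with either a `text` or a `file` key (the instruction content and file name below are hypothetical):

```json
{
  "github.copilot.chat.codeGeneration.instructions": [
    { "text": "Always add a comment: 'Generated by Copilot'." },
    { "file": "code-style.md" }
  ]
}
```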
Prompt snippets
The final piece is to build your own prompt library made up of prompt snippets: prompt files that you can manually bring into the chat context. They end in `.prompt.md` and by default they are stored inside your GitHub project under `.github/prompts`, making them available to everyone. You can also store them in your VS Code profile to reuse them across projects.
- `chat.promptFiles` (Experimental): enable prompt file locations. Use the `{ "/path/to/folder": boolean }` format.
To attach a prompt file to a chat request select the Attach Context icon (⌘/), and then select Prompt.... Or alternatively, use the Chat: Use Prompt command from the Command Palette (⇧⌘P).
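For example, a reusable snippet could live in `.github/prompts/security-review.prompt.md` (a hypothetical file name and content):

```markdown
Perform a security review of the selected code:

- Check all user input for injection vulnerabilities.
- Flag any secrets or credentials committed to the repository.
- Verify that error messages do not leak internal details.
```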
Fetch those docs
In addition to files local to your codebase, you can now also refer to remote files using `#fetch` in the chat prompt. This allows you to pull in external documentation or internal central information. We expect more information to be optimized for prompt consumption, and this is a great step in that direction.
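A chat prompt using it could look like this (the URL is purely illustrative):

```
Compare our retry logic with the guidance in
#fetch https://example.com/engineering-handbook/retries.md
```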
Hungry for more
The prompt management is a great step forward; here’s what we’re still missing:

- A way to express when a prompt should be automatically added to the base prompt, similar to a description-based rule system: a cross between the action instructions and prompt snippets.
- Inline code completion currently does not take these prompts into account, but agent mode does. Wait, what … agent mode?!
- Combining `#fetch` with prompt snippets: allow fetching remote documents when prompts are used in real time.
Agent mode, finally!
Ask, Edit and now Agent
Finally, we see Copilot enabling agent mode. This was the most glaring missing feature compared to the new tools. Next to “ask” and “edit” mode we now have agent mode. It supports a continuous conversation in the chat and allows the AI to reason about and plan the code it generates. The integration is seamless and works well.
Agent mode is enabled by setting `chat.agent.enabled`. If you do not see the setting, make sure to reload VS Code. Enabling the setting will no longer be needed in the coming weeks, as GitHub rolls out enablement by default to all users.
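In `settings.json` that is a single flag:

```json
{
  "chat.agent.enabled": true
}
```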
For more details see the blog post on Copilot Agent Mode.
Claude models too
You might have to select another model, such as Claude 3.5, to see great results. We can also switch models: by default Copilot chat uses `gpt-4o`, but you can switch to the Claude 3.5 or 3.7 models. One downside of agent mode is that you can’t use images in the context yet.
For more details see GitHub Docs: Switching Copilot's AI Model.
Thinking as tool
In addition to the regular agent mode they also added support for a thinking tool in agent mode (Inspired by Anthropic's research) that can be used to give any model the opportunity to think between tool calls. This improves the agent's performance on complex tasks in-product and on the SWE-bench eval.
Setting: `github.copilot.chat.agent.thinkingTool`
For more details see the release notes on the Thinking Tool.
If all you have is an agent, everything is a tool
Model Context Protocol (MCP)
Agents really shine when we hand them the right tools. Championed by Anthropic, the Model Context Protocol (MCP) is making waves as a way to extend code generation. Vendors are providing their tooling so it can be plugged into the IDE. What’s great is that the agent can autodiscover which tools are available and select the right one depending on the task. It currently supports both Standard Input/Output (stdio) and Server-Sent Events (SSE) as transport protocols.
Once you have Agent mode enabled, you can turn it on in the settings:
- `chat.mcp.enabled`: true
- `chat.mcp.discovery.enabled`: true
The MCP: Add Server command allows you to quickly set up Docker, npm or PyPI servers. While you still have to make sure the runtimes are installed, it handles the whole installation process automatically. The configuration is typically saved at `.vscode/mcp.json`, but it can be stored in the user or remote profiles too.
For more details see the release notes on MCP support.
Here’s a quick example of how you can set up two browser-based MCP services:

- Using Puppeteer with Docker – Docker Hub: mcp/puppeteer
- Using Playwright via npm – GitHub: microsoft/playwright-mcp
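A `.vscode/mcp.json` along these lines wires both up; treat it as a sketch, since the exact image tags and package names may differ in your setup:

```json
{
  "servers": {
    "puppeteer": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "--init", "mcp/puppeteer"]
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
```

Both servers run over stdio: VS Code spawns the command and talks MCP over the process's standard input and output.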
Yolo mode
Before executing MCP commands, the IDE will ask for your approval. If you really want to go into vibe coding mode, you can blindly accept all tool suggestions and have the agent execute the tools automatically. Just set `chat.tools.autoApprove: true` and feel the vibes. Use at your own risk && fun!
Will it be enough?
With the category of AI coding tools exploding (See the AI native Landscape), every tool has to keep up and reinvent itself. We definitely see the commoditization of once differentiating features.
Yet, there is still a lot of improvement and innovation possible. Here are a few directions inspired by the 4 patterns of AI native development:

- Moving from prompt management toward writing specifications
- Better UX to deal with the cognitive load and decisions of agents
- Discovery and exploration of new ideas and multiple parallel implementations
- Turning code into learning opportunities and capturing knowledge
“There’s a way to do it better — find it.” — Thomas Edison