Exploring MCP By Building My Own

21 Jul 2025

Zachary Galbraith

Coding a Personal Agent with MCP - Resources, Tools, and Prompts

The Model Context Protocol has become important in an increasing number of LLM workflows since its release in November 2024. Recently, I vibe-coded my own MCP server to make a daily schedule based on my goals and todos. I learned a lot about what capabilities have been added to the Model Context Protocol standard since its initial introduction, and the new possibilities they unlock.

What is an MCP Server?

First, let’s review the basics. MCP servers allow developers to expose three types of abilities to an LLM: tools, resources, and prompts. With these capabilities, you can make your LLM much more powerful, integrating with applications, gathering context, or storing memory.

Resources

Resources are read-only pieces of information that the model can access. This could be a markdown file, a Slack thread, or a database query result. If you’ve used tools like context7, you’ve already interacted with resources. They're how models gather information outside of their training data.
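As a rough sketch, here is what serving a resource read could look like. This is plain Python, not the official SDK; the todo:// URI scheme and the file contents are made up for illustration, though the resources/read method name comes from the MCP spec.

```python
import json

# Hypothetical resource store. The todo:// URI scheme is invented here.
RESOURCES = {
    "todo://today": "- write blog post\n- attend team sync\n- gym",
}

def read_resource(request: dict) -> dict:
    """Handle a JSON-RPC "resources/read" request and return the text."""
    uri = request["params"]["uri"]
    return {
        "jsonrpc": "2.0",
        "id": request["id"],
        "result": {
            "contents": [
                {"uri": uri, "mimeType": "text/plain", "text": RESOURCES[uri]}
            ]
        },
    }

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "todo://today"},
}
print(json.dumps(read_resource(request), indent=2))
```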

Tools

Tools are functions that the model can call. A tool takes structured input, deterministically performs an action, and returns structured output. An example would be switching on the lights in your room.
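The lights example can be sketched in a few lines. The tool name, room names, and state store here are all hypothetical; the point is the shape: structured input in, a deterministic action, structured output back.

```python
# Hypothetical smart-home state the tool acts on.
LIGHTS = {"bedroom": False, "kitchen": False}

def switch_lights(arguments: dict) -> dict:
    """Set a room's lights on or off and report the resulting state."""
    room, on = arguments["room"], arguments["on"]
    LIGHTS[room] = on
    return {"room": room, "on": LIGHTS[room]}

result = switch_lights({"room": "bedroom", "on": True})
print(result)  # {'room': 'bedroom', 'on': True}
```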

Prompts

Prompts are lesser known than resources and tools, but are still very useful. They are reusable prompt templates that your server exposes, which the client can surface to the user or inject into the conversation. For example, a prompt could instruct the model to get the current time before running your scheduling tool.
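A prompt is really just a named, reusable message template. Here is a minimal sketch of serving one; the prompt name and wording are hypothetical, and the message shape loosely follows what an MCP prompts/get response carries.

```python
# Hypothetical prompt registry. "plan_my_day" is an invented prompt name.
PROMPTS = {
    "plan_my_day": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Get the current time first, then call "
                            "generate_schedule with my remaining goals.",
                },
            }
        ]
    }
}

def get_prompt(name: str) -> dict:
    """Return the message template for a named prompt."""
    return PROMPTS[name]

print(get_prompt("plan_my_day")["messages"][0]["content"]["text"])
```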

[Diagram: the flow of getting MCP server context for Claude Desktop.]

Communication between the client (Claude Desktop, for example) and the server happens over JSON-RPC, meaning formatting is predictable and easy to debug. Below is an example of what a client call and response might look like:

Request from the client (e.g. Claude Desktop → MCP server)

{
  "jsonrpc": "2.0",
  "id": "1",
  "method": "tools/call",
  "params": {
    "name": "generate_schedule",
    "arguments": {
      "goals": ["write blog post", "attend team sync", "gym"],
      "available_hours": ["09:00–12:00", "14:00–18:00"]
    }
  }
}

Response from the MCP server

{
  "jsonrpc": "2.0",
  "id": "1",
  "result": {
    "structuredContent": {
      "schedule": [
        { "time": "09:00", "task": "write blog post" },
        { "time": "14:00", "task": "attend team sync" },
        { "time": "16:00", "task": "gym" }
      ]
    }
  }
}
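To make the round trip concrete, here is a toy dispatcher that routes a tool call to a Python function and wraps the result. This is my own sketch, not the official SDK, and the generate_schedule logic is a placeholder that just pairs goals with fixed slots.

```python
import json

# Toy registry mapping tool names to Python functions.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def generate_schedule(arguments: dict) -> dict:
    # Placeholder logic: pair each goal with a hard-coded slot in order.
    slots = ["09:00", "14:00", "16:00"]
    return {"schedule": [
        {"time": t, "task": g} for t, g in zip(slots, arguments["goals"])
    ]}

def handle(raw: str) -> str:
    """Dispatch a JSON-RPC tools/call request and return the response."""
    req = json.loads(raw)
    result = TOOLS[req["params"]["name"]](req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

raw = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "generate_schedule",
               "arguments": {"goals": ["write blog post", "gym"]}},
})
print(handle(raw))
```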

New Capabilities

The June 2025 spec update added lots of new features with a ton of interesting potential. These changes, while seemingly small, can make a big difference in LLM workflows, especially agentic ones.

Structured Tool Output

Previously, there was no way for an MCP tool to declare the shape of its responses. A tool could claim it would return schedule, but actually return num_of_apples. With this update, tools can provide an output schema, an outline of how their responses will be formatted. If a server's output doesn't match the schema, it's marked as invalid.
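Here is the idea in miniature. A real client would validate against full JSON Schema; this hand-rolled checker (my own simplification) only verifies required keys and their types.

```python
# Declared output shape: the tool promises a "schedule" key holding a list.
OUTPUT_SCHEMA = {"schedule": list}

def matches_schema(output: dict, schema: dict) -> bool:
    """Check that every declared key exists with the declared type."""
    return all(
        key in output and isinstance(output[key], expected)
        for key, expected in schema.items()
    )

good = {"schedule": [{"time": "09:00", "task": "gym"}]}
bad = {"num_of_apples": 3}  # claimed a schedule, returned something else

print(matches_schema(good, OUTPUT_SCHEMA))  # True
print(matches_schema(bad, OUTPUT_SCHEMA))   # False
```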

Elicitation

This is by far my favorite addition. With it, servers can pause workflows to ask for missing context with a question or series of questions. Instead of hallucinating API usage and capabilities, your LLM can simply pause, ask questions, and continue. This is like the question phase of deep research in ChatGPT, if you’ve used that.
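The pause-and-ask pattern looks roughly like this. The question text, field names, and the ask callback are all hypothetical stand-ins for the client-side elicitation handler that would actually prompt the user.

```python
def plan_day(arguments: dict, ask):
    """Return a plan, pausing to elicit available_hours if it's missing."""
    if "available_hours" not in arguments:
        # Instead of guessing, ask the user via the client and resume.
        arguments["available_hours"] = ask(
            "What hours are you free today? (e.g. 09:00-12:00)"
        )
    return {"hours": arguments["available_hours"], "goals": arguments["goals"]}

# Stand-in for the client: answers questions from a canned list.
answers = iter(["09:00-12:00"])
result = plan_day({"goals": ["gym"]}, ask=lambda question: next(answers))
print(result)  # {'hours': '09:00-12:00', 'goals': ['gym']}
```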

Resource Links in Tool Outputs

Tools can now return links to their resources. For example, a research MCP tool can now return a link to its research summary after finishing. The links are type-safe and subscribable, allowing smooth workflows.
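In practice, a tool result with a resource link is just another content item. The URI and names below are hypothetical; the "resource_link" content type is the one described in the spec update.

```python
def finish_research() -> dict:
    """Return a tool result linking to a summary resource by URI."""
    return {
        "content": [
            {
                "type": "resource_link",
                "uri": "research://summaries/mcp-deep-dive",  # invented URI
                "name": "MCP deep-dive summary",
                "mimeType": "text/markdown",
            }
        ]
    }

link = finish_research()["content"][0]
print(link["type"], link["uri"])
```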

What You Can Build With MCP Today

With the plethora of coding assistant tools available, wiring up your own MCP server is very approachable, no matter how technical you are. Even one or two customized MCP tools can make your LLM feel much more capable. Here are a few examples of useful projects:

  1. Daily Planner – This is what I’m currently working on. Expose your to-dos as a resource, have your LLM ask a few questions about your priorities, then use a generate-schedule tool to place them in open time slots in your calendar.

  2. Memory – My favorite current MCP server is my memory server. I strongly encourage you to either make your own or install a pre-made memory MCP server. A simple version of this could be a search_info tool that looks through your digital files to gather information about you.

  3. Home Automation Agent – Give your LLM access to smart home APIs like set_lights and lock_doors. Ask your LLM if the bathroom lights are on, which doors are unlocked, or ask it to make a schedule to dim the lights according to your sleep schedule.
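The daily-planner idea above can be sketched as a greedy scheduler: place each task at the next free hour inside the given windows. This is my own illustration, not the author's actual server; the one-hour task length is an assumed default.

```python
from datetime import datetime, timedelta

def generate_schedule(goals, available_hours, task_minutes=60):
    """Greedily assign each goal a start time within the open windows."""
    schedule = []
    remaining = list(goals)
    for window in available_hours:
        start_s, end_s = window.split("-")
        cursor = datetime.strptime(start_s, "%H:%M")
        end = datetime.strptime(end_s, "%H:%M")
        # Fill this window with tasks until it can't fit another one.
        while remaining and cursor + timedelta(minutes=task_minutes) <= end:
            schedule.append({"time": cursor.strftime("%H:%M"),
                             "task": remaining.pop(0)})
            cursor += timedelta(minutes=task_minutes)
    return schedule

plan = generate_schedule(
    ["write blog post", "attend team sync", "gym"],
    ["09:00-10:00", "14:00-18:00"],
)
print(plan)
```

A real server would expose this function as an MCP tool and read the goals from a to-dos resource, but the core placement logic stays this simple.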

Here’s a demo of my Daily Schedule MCP Server:

Try It Yourself

You don’t need to be an expert to build a useful MCP server! Start small with one tool referencing local files, and gradually move up to more complex, higher-leverage servers. The official website has a few examples you could copy.
