
MCP: The USB-C for AI


Redefining Developer Workflows in the AI Era with MCP

with Steve Manuel

Chapters

Trailer [00:00:00]
Deep Dive into MCP Architecture [00:01:32]
Trust and Security in MCP Servers [00:04:50]
Managed vs. Self-Hosted MCP Servers [00:21:11]
Understanding Serverless and MCP Servers [00:26:48]
Developing and Deploying on mcp.run [00:29:10]
Security and Authentication in MCP [00:33:20]
Future of MCP and AI Innovations [00:38:31]

In this episode

In this episode of AI Native Dev, host Simon Maple and Steve Manuel, founder and CEO of Dylibso, delve into the Model Context Protocol (MCP), touted as the "USB-C for AI." They explore how MCP offers a standardized, model-agnostic interface for connecting AI models to tools, data, and services, enabling developers to build once and run anywhere. Key insights include the architecture's clean separation of responsibilities, the emergence of the Anthropic MCP Registry for better discovery and trust, and Dylibso’s MCP Run providing secure, cost-effective execution for third-party servers.

MCP is emerging as the “USB-C for AI”: a standardized way to connect models to tools, data, and services. The conversation covers what the protocol is, how its client–server architecture works, why registries matter for discovery and trust, and how Dylibso’s MCP Run (mcp.run) offers a secure, hosted execution environment for third-party servers, blending protocol-level clarity with pragmatic guidance for developers building AI-native applications and agent workflows.

Why MCP is the “USB‑C” for AI tools

Developers have long wrestled with divergent tool/function calling conventions across LLMs and agent frameworks. One model wants parameters described one way, another expects a different signature, and return values vary just as widely. MCP fixes this by providing a common, model-agnostic interface that any client can understand and any server can implement. Once an MCP server wraps a dataset, API, or SaaS, it becomes reusable across clients like Claude Desktop, ChatGPT, or bespoke agents — “write once, run anywhere” for AI tools.

The benefit isn’t just conceptual elegance; it’s practical leverage. Teams can build tools once and reuse them across multiple AI experiences, swap models or hosts without rewriting integrations, and compose richer workflows by mixing and matching MCP servers. This standardization compresses integration time, reduces drift across environments, and enables a new layer of portability for AI-native applications.

Steve notes that, from the moment MCP was announced (November 2024), it looked like the plugin system for all AI software — especially if you accept the premise that all software is becoming AI software. Dylibso adopted the protocol early, layering it onto their isolation stack to bridge secure plugin execution with the new MCP ecosystem.

Inside the MCP architecture: host, client, and server

MCP follows a familiar client–server pattern, but with a twist designed for LLM workflows. The “server” side encapsulates access to upstream systems: databases, APIs, SaaS, or custom logic. It’s the adapter that performs actions and fetches context for the model. The “client” sits adjacent to the model inside the host (e.g., Claude Desktop, ChatGPT, or your own agent runtime). The client knows how to discover and connect to servers, list available tools, call those tools with structured parameters, and manage resources.

In practice, your host environment embeds an MCP client that negotiates protocol operations such as tool listing and invocation (tools/list and tools/call in the spec) and resource handling. The server implements the same spec from the other side, exposing a consistent view of its capabilities to any compatible client. That separation lets you put servers wherever they make sense, whether local, remote, or hosted, while keeping the model-side logic thin and portable.
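
As a concrete illustration, here is a minimal client-side sketch using the official TypeScript SDK (@modelcontextprotocol/sdk). The server command and the get_weather tool are hypothetical stand-ins, not anything from the episode:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch a local MCP server as a child process and talk to it over stdio.
// The command and script name here are hypothetical.
const transport = new StdioClientTransport({
  command: "node",
  args: ["weather-server.js"],
});

const client = new Client({ name: "example-host", version: "1.0.0" });
await client.connect(transport);

// Discover what the server offers (tools/list under the hood).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Invoke a tool with structured arguments (tools/call under the hood).
const result = await client.callTool({
  name: "get_weather",
  arguments: { city: "Berlin" },
});
console.log(result.content);
```

Note how the host-side code never references a specific model or vendor: swapping the transport or pointing at a different server leaves this logic untouched.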

For developers, this architecture yields clean responsibilities. If you own a data source or service, you publish an MCP server as the canonical access point. If you’re building an agent or AI feature, you configure your host/client to discover and call the right servers. The result: tooling that’s predictable to integrate, safer to reason about, and simpler to reuse across applications and models.

Discovery and the new Anthropic MCP Registry

Early MCP adopters had to spelunk GitHub to find useful servers, often wiring up local CLIs or executables as ad hoc tools. Discovery and trust were real pain points. That’s changing with the new official Anthropic MCP Registry and the surrounding community effort: a universal index that points to MCP servers hosted across registries, GitHub, and first-party providers.

The registry’s design supports sub-registries — vertical catalogs for domains like marketing, developer tools, or financial services — all federated to a common backbone. Beyond making servers easier to find, the registry aims to improve identity and provenance so developers can verify who’s publishing a server and assess basic trust signals. While quality and security vetting are ongoing challenges, centralizing listings and identity is a major step forward.

Practically, this means developers can discover higher-quality servers faster, evaluate them in a consistent format, and incorporate them into hosts with fewer surprises. If you’re publishing a server, align with the registry’s guidelines: document tool signatures and parameters, clarify authentication, version your API, and establish clear ownership. As first-party providers like PayPal and GitHub publish official MCP endpoints, the registry will increasingly act like DNS for the AI tool layer — a findable, trustworthy directory of capabilities.
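
For a sense of what a well-documented server looks like in code, here is a minimal sketch using the TypeScript SDK, where the tool’s signature and parameter descriptions are declared up front. The server name, tool, and fields are hypothetical:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A minimal server whose tool signature is self-documenting:
// name, description, and typed parameters are all declared in one place.
const server = new McpServer({ name: "invoice-lookup", version: "0.1.0" });

server.tool(
  "get_invoice",
  "Fetch a single invoice by its identifier",
  { invoiceId: z.string().describe("Opaque invoice identifier") },
  async ({ invoiceId }) => ({
    content: [{ type: "text", text: `Invoice ${invoiceId}: ...` }],
  })
);

// Serve over stdio; any MCP-compatible client can now discover and call it.
await server.connect(new StdioServerTransport());
```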

MCP Run (mcp.run): hosted, secure execution for third‑party servers

Dylibso’s MCP Run (mcp.run) moves beyond discovery to safe execution. It hosts user code that implements MCP servers in a highly isolated WebAssembly (Wasm) environment. Instead of spinning up heavy containers or VMs, mcp.run executes small, well-contained functions, delivering strong isolation with favorable performance and cost characteristics. For developers evaluating third-party tools (or publishing their own), this reduces the “blast radius” if something goes wrong.

This is valuable because the reality of third-party code is messy: not every server will be first-party, audited, or airtight. By sandboxing execution with Wasm, mcp.run mitigates risks that would otherwise demand full code review or dedicated infrastructure. Developers can spin up servers quickly, run tasks against them, and iterate on workflows without committing to long-term hosting or security overhead from day one.
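
The isolation property being described is deny-by-default: a Wasm guest can only reach capabilities the host explicitly passes in. This is not mcp.run’s actual implementation, just a minimal Node.js illustration of that boundary, assuming a hypothetical tool.wasm module that exports an add function:

```typescript
import { readFile } from "node:fs/promises";

// Load a compiled Wasm module (the file and its exports are hypothetical).
const bytes = await readFile("./tool.wasm");
const module = await WebAssembly.compile(bytes);

// Instantiate with an empty import object: the guest receives no ambient
// authority. No filesystem, no network, no clock; only what the host wires in.
const instance = await WebAssembly.instantiate(module, {});

// The exported function is the entire interface between host and guest.
const add = instance.exports.add as (a: number, b: number) => number;
console.log(add(2, 3)); // 5
```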

MCP Run also aids experimentation. You can combine servers from the registry, prototype multi-tool agent flows, and harden your setup as you graduate to production. When a server becomes core to your stack, you can migrate it to your own infrastructure or adopt a first-party hosted endpoint. MCP’s standardized interface ensures that move is low-friction: the client/host doesn’t change, only the server’s location and identity.

Security, trust, and enterprise adoption

Security is the number-one concern Simon and Steve highlight. The parallels to open-source supply chain risk are real: unvetted code, dependency vulnerabilities, and malicious packages. The MCP ecosystem is actively addressing this with identity in the registry, stronger sandboxing via Wasm isolation, and the growing trend of first-party MCP endpoints published by the services themselves. Over time, the “trusted-by-default” path will look a lot like HTTP today: use official endpoints for core integrations; reserve third-party servers for specialized needs with appropriate sandboxing and guardrails.

Enterprises of all sizes, from startups to large organizations, are already piloting and adopting MCP. The early 2025 wave of client support made MCP visible to non-technical users, and that visibility is accelerating demand. Teams are layering MCP into agent frameworks, IDE assistants, and internal AI copilots, typically starting with read-only tools and narrow scopes before expanding to write actions with tighter policies and auditing.

Concretely, developers should adopt a security-first posture:

  • Treat servers like any external integration: scope tokens, rotate credentials, and use least privilege.
  • Prefer first-party MCP endpoints where available; otherwise, sandbox third-party code (e.g., via Wasm) and monitor aggressively.
  • Maintain an allowlist of approved servers, pin versions, and capture audit logs of tool calls and resource access (see the sketch after this list).
  • Validate inputs/outputs at the client/host boundary to prevent prompt injection or unsafe tool invocation.
  • Run staged rollouts with feature flags; measure tool reliability and latency before scaling.
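
As a starting point for the allowlist and audit items above, here is a minimal sketch built on the TypeScript SDK’s client; the tool identifiers and logging approach are illustrative assumptions, not a hardened implementation:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical policy: only these server/tool pairs may be invoked.
const ALLOWED_TOOLS = new Set(["github:list_issues", "paypal:get_invoice"]);

// Wrap tool invocation with an allowlist check and an audit record.
async function guardedCallTool(
  client: Client,
  serverId: string,
  name: string,
  args: Record<string, unknown>
) {
  const key = `${serverId}:${name}`;
  if (!ALLOWED_TOOLS.has(key)) {
    throw new Error(`Tool ${key} is not on the allowlist`);
  }

  const startedAt = Date.now();
  const result = await client.callTool({ name, arguments: args });

  // Audit log: which tool ran, with which arguments, and how long it took.
  console.log(JSON.stringify({ tool: key, args, ms: Date.now() - startedAt }));
  return result;
}
```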

Key Takeaways

  • MCP standardizes tool and resource access for AI, delivering “write once, run anywhere” portability across hosts, models, and agents.
  • The architecture cleanly separates responsibilities: servers wrap upstream systems; clients live next to the model in the host, handling discovery, tool listing, and calls.
  • The Anthropic MCP Registry improves discovery and identity, enabling sub-registries and a more trustworthy ecosystem for server listings.
  • Dylibso’s MCP Run (mcp.run) provides hosted, Wasm-based isolation for third-party servers, making experimentation safer and cheaper without container/VM overhead.
  • Security best practices are essential: prefer first-party endpoints, sandbox third-party code, scope secrets, and log/audit tool calls.
  • Enterprise adoption is underway; start with read-only tools, validate performance and reliability, and then expand to write actions with careful policies.
  • For developers, the playbook is clear: discover servers via the registry, prototype in a hosted sandbox like mcp.run, and harden as you move to production, all without rewriting tools when you change models or hosts.
