
MCP: The USB-C For AI
Redefining Developer Workflows in the AI Era with MCP
Transcript
[00:00:25] Simon: Hello, and welcome to another episode of the AI Native Dev, and we've got a fun discussion here today. Talking about all things MCP, we're gonna go into the depths of what MCP is, the various architectural pieces of MCP. Talk a little bit about something called MCP Run, which is a super interesting tool that allows you to use other people's MCP servers.
[00:00:48] Simon: It's a hosted environment that allows you to also run tasks against various MCP servers. So we'll take a bit of a deep dive into that. We'll also talk a little bit about what's new in the world of Anthropic with the new MCP registry, and talk about what MCP looks like for the future as well, for developers and our industry.
[00:01:07] Simon: And joining me on this journey is Steve Manuel, and Steve is the founder and CEO of Dylibso. Steve, welcome. How are you?
Steve: Doing very well. Thanks for having me, Simon.
Simon: Oh, absolutely. Pleasure. And whereabouts are you calling from today, Steve?
Steve: I'm in the San Francisco Bay area.
Simon: Oh, nice. The heart of AI in the US, right?
[00:01:27] Steve: Exactly.
[00:01:27] Steve: We're all locked in here.
[00:01:28] Simon: Yeah. And so let me guess, you go to an AI meetup every, what about three times an evening? Is that about right?
Steve: Yeah, exactly.
Simon: Yeah. Awesome. So Steve, tell us a little bit about yourself, and a little bit about your journey through MCP as well.
[00:01:45] Simon: 'Cause obviously MCP has not been out for a huge amount of time, but there's been so much innovation. I'd love to hear a little bit about how you started with MCP and how you built things like MCP Run and things like that going forward.
[00:02:01] Steve: Sure, sure. Totally. So, yeah, I kind of got started on the MCP journey by accident. The company Dylibso has, since its inception, been working on compute isolation technology through WebAssembly for the purpose of plugin systems, largely for making apps extensible beyond their original design and functionality, enabling an end user to customize the application experience by injecting new code into that application.
[00:02:32] Steve: Through the security model of WebAssembly runtimes and code execution environments, this actually is kind of a practical, safe way to extend third party software. And so, as a plugin person, when I saw MCP announced in November of 2024, like ten years ago, it was very clear to me that this was the plugin system for all AI software.
[00:02:56] Steve: And if we could accept the fact that all software is going to become AI software, then this MCP is the kind of plugin system that matters. So we very quickly were early to adopt MCP, implementing the protocol on top of our isolation wasm stack.
[00:03:16] Steve: And that since has become a whole kind of interesting dynamic for the company and has presented new opportunities for us to firstly bring our extensibility technology into a new space, but also just explore how else we can bring useful features and functionality into applications through the virtue of MCP, as the connector layer between systems and AI, large language models.
[00:03:45] Simon: Yeah. Super interesting. One of the probably overused quotes that came out of Anthropic was how MCP is like the USB-C for AI. [00:04:00] And sure, that makes a ton of sense. But one of the things that makes USB-C super interesting and valuable is the fact that it uses the same API, I guess, and that API obviously connects everywhere.
[00:04:15] Simon: People know how to use it. What is it that MCP solves that existing API patterns don't?
[00:04:21] Steve: Yeah. I love the analogy. And I think it makes, you know, perfect sense to explain it this way, because if you think about USB-C as more of a connection port, as a way to interconnect different things, be it a phone or a computer or a camera, the cable that connects those two devices is standardized.
[00:04:46] Steve: And so if anybody has a USB-C cable just laying around, it can be used for a variety of different devices. And the same is true for MCP. As soon as an MCP server is [00:05:00] implemented for a particular dataset, API or service, that same MCP server can be used in any endpoint, any client, whether it be ChatGPT, or Anthropic, or an AI agent that is created in a kind of bespoke manner to do a variety of different tasks.
[00:05:19] Steve: And so it really is this kind of write-once-run-anywhere eventuality, and I mention this particularly 'cause you're wearing this Java shirt, that allows any system to communicate with another over this standard. The problem, really, at its core, that I think pushed the Anthropic team, you know, to firstly devise the standard and then open source it and release it.
[00:05:48] Steve: You know, and I, I'm sure many of the people on the core team will be angry that I specifically narrowed the problem down to tools. But function calling, tool calling, tool use in agents and models is not a new capability. However, every single model and library and framework had a different way of describing how tools should be used, how the parameters are provided, how their values are returned, and a different format for kind of laying out what that tool looked like.
[00:06:18] Steve: So what is the signature? Is it in a JSON file? Is it described through, you know, just freeform text? It meant that everybody had to basically rewrite their tools, or at least the tool interface, between different models or agent frameworks, and it was just very cumbersome to do that. And so I think Anthropic rightly realized that they could do something here to improve that and make it easier for people to share and reuse these implementations across a variety of different tools and services.
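To make the standardization concrete, here's roughly what a tool definition looks like under MCP: a name, a description, and a JSON Schema for the inputs. (The field names follow the MCP spec; the `get_email` tool itself is invented for illustration.)

```python
import json

# A hypothetical "get_email" tool, described the one standardized way MCP
# defines tools: a name, a human/model-readable description, and a JSON
# Schema ("inputSchema") for the arguments the model must supply.
tool = {
    "name": "get_email",
    "description": "Search the user's mailbox and return matching messages.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "limit": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}

# Because every server advertises tools in this same shape, any client can
# present or call them without bespoke glue code per model or framework.
wire = json.dumps(tool)
```

Contrast this with the pre-MCP situation Steve describes, where each framework had its own, incompatible way of laying out the same signature.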
[00:06:56] Simon: And it's funny, 'cause when they released that, you know, like you say, a couple of decades ago, in November of 2024, it was almost quite a quiet launch. There was nothing huge made out of it. Yeah. But I think we looked at it and we saw the raw potential that this could have on LLMs, and how powerful this is, how this empowers them really to make those decisions.
[00:07:25] Simon: And exactly like you say, it's that in that standardized way that really unlocks it across models. I guess you mentioned a few terms there. Let's, let's kind of go in and demystify a few of those terms. So there's obviously the protocol. Let's talk a little bit about the protocol as well. You've got the protocol that kinda works between client and server, and then you have obviously the MCP client, the MCP server.
[00:07:45] Simon: Talk us through the architectural pieces here, of where things sit, what we develop, why we would use one over the other.
[00:07:54] Steve: Sure. So, firstly, the protocol is kinda split in two segments, like [00:08:00] many protocols are, especially of the client-server variety. Just like HTTP, MCP is broken into a server and a client.
[00:08:09] Steve: And a server is basically the way you communicate with some upstream data source or service. You know, how do I get access to do the action or to read the email or to call some system that provides some additional context to a model. And the client is kind of this interceptor between the server and the model that is actually communicating over the protocol to get that data or call that tool or manage some resource. And the client is right next to the model. Usually they're contained inside what the spec defines as a host. And so that could be ChatGPT or Claude Desktop, or it could be an AI agent that's running that has access to tools.
[00:08:58] Steve: An MCP client is the [00:09:00] side of the implementation that knows how to discover and connect to an MCP server, and then also to call the various spec functionalities, like listing tools, or calling tools, or finding resources. Sorry, go ahead. That's all; I think that's an okay general high-level overview.
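In wire terms, that client-server flow is small: MCP messages are JSON-RPC 2.0, and the client mainly issues `tools/list` to discover tools and `tools/call` to invoke one. A sketch (method names per the MCP spec; the tool and arguments are invented):

```python
import json

# Discovery: the client asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Invocation: the client calls one advertised tool with model-chosen
# arguments ("get_email" is a hypothetical tool for illustration).
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_email",
        "arguments": {"query": "from:alice", "limit": 5},
    },
}

# Over the stdio transport these are written as newline-delimited JSON to
# the server process's stdin; over HTTP they are request bodies.
frame = json.dumps(call_request)
```

The host (Claude Desktop, ChatGPT, or a bespoke agent) sits around the client and relays tool results back into the model's context.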
[00:09:25] Simon: Yeah, absolutely. So, when we think about the clients, obviously we install these clients onto Claude Desktop or Claude Code, or wherever we want to use MCP servers. The servers, though: one of the big challenges is really identifying which servers we wanna use, that discoverability of servers.
[00:09:46] Simon: And of course this is kinda where a ton of all these registries and things like that sit. Why don't you give us an intro into mcp.run? 'Cause I think it's super cool how, I guess, you know, there's a number of registries out there that kinda perform these collations of servers that you can use.
[00:10:08] Simon: mcp.run, which we'll talk about in a little sec, goes that step further and allows you to actually, you know, perform activities or tasks on top of these servers that exist. But talk us a little bit through, you know, what a registry is, how we use a registry, and a little bit about mcp.run.
[00:10:27] Steve: Sure. So first, everybody should be aware that Anthropic, as well as the community group that works on the protocol, have established an official registry, which is really exciting, that supports the collection and discovery of MCP servers implemented across the ecosystem.
[00:10:46] Steve: And the goal, and I don't wanna speak for them, but I'm just gonna regurgitate a little bit of what you know is already published, is to provide a kind of [00:11:00] universal and discoverable layer of infrastructure that provides pointers and listings of MCP servers that are hosted even on other registries or out on GitHub or anywhere else.
[00:11:11] Steve: And the goal is to serve the community to also be able to build sub registries on top of this registry. So perhaps, you know, there is a vertical focused registry that's just for marketing tools or just for developer tools, or just for financial services. The goal is to build, to support more focused sub registries that can all pull from this registry.
[00:11:33] Steve: So just a shout out to the teams, largely from Block, PulseMCP, Anthropic, and more, who have been putting in the hours to get this stood up. It's a really neat implementation, and it's all open source, so go check that out.
[00:11:53] Steve: And yeah, discovery is challenging. I think when MCP was first announced, there was this sprawl of software engineers who were basically hooking up kind of CLI apps, local executables, into these chat applications. And your best bet was to kind of just look through GitHub to try to find, you know,
[00:12:15] Steve: someone's NPM library that was presented as an MCP server. And it was smart to figure, we need someplace to make it easy to find high quality MCP servers, but also to secure the ecosystem through identity. Make sure that whoever's publishing MCP servers can prove that they're publishing something
[00:12:36] Steve: that they own, that is trustable. There's still a lot of work to do on the trust and quality management perspective, but at least it's a step in the right direction to have a large global universal registry to manage MCP servers. Yeah. Before Anthropic and the organizing group around the spec did the work [00:13:00] to create an official registry, many
[00:13:03] Steve: other companies or organizations or teams worked on registries so that we could fill the gap until there was something official. And so ourselves and many other companies have put some work into hosting registries. Ours is a little bit different in that mcp.run hosts user code that implements MCP servers from WebAssembly modules.
[00:13:33] Steve: This allows us to execute their code in a very isolated environment, and brings a level of security that just can't be matched with other technology for the same efficiency and cost and performance trade-offs. So instead of spinning up Docker images or a whole virtual machine, we can just run a little function inside of a WebAssembly module that implements the tool calls.
[00:13:58] Steve: Not a super critical detail, but the goal has always been to kind of provide the most secure environment for third party code to execute on our platform. But we're also seeing largely that many MCP servers are moving to their own remote hosts. So PayPal, GitHub, many other companies are realizing we should just publish an endpoint like we do in HTTP that allows a more kind of agent oriented client to access and use our service.
[00:14:36] Steve: So we're very excited by this move to official MCP servers hosted by their first party provider, kind of being the eventuality.
[00:14:47] Simon: It's really interesting, isn't it? When you see, you know, just then when you kind of talk through that, with my background at Snyk, I kind of shudder every time I hear certain things. You know, with my background in open source, third party libraries with certain vulnerabilities and malicious libraries and things like that, the potential for something quite damaging is pretty big.
[00:15:12] Simon: But it's all about how we actually put the precautions in place so that we limit the potential blast radius. What you're talking about here, where it's more containerized, it's running in isolation, those kinds of things: really, really good. I've used MCP servers before where I've looked at other people's GitHub repos, and I'm like, oh gosh, now I've gotta do a full code review of this, 'cause I wanna understand, is this actually legit, or is it looking a bit dodgy?
[00:15:48] Simon: The idea of third parties running MCP servers kinda scares me a bit, but then when you know it's a validated MCP server from a trusted source, okay, then it's potentially just another service that they're offering on the interwebs.
[00:16:04] Simon: How far are we from people being able to truly trust what they see from MCP servers and what, in your experience, is that level of trust in the industry? So are you seeing this pickup from startups, from mid-size companies, from enterprises that are actually fully using MCP servers in production today?
[00:16:28] Steve: Yeah, definitely. I mean, it was surprising when MCP was first announced that there was a relative lull around the excitement and interest. And I think that initially it was kind of too oriented for developers. And developers quickly saw that there was potential here, there was something exciting here and we saw this explosive growth then in the earlier half of 2025.
[00:16:55] Steve: Now we're seeing that the technology and the [00:17:00] protocol have been implemented enough in a variety of different clients that they're visible to a non-technical audience. Yeah. And so what we see is, absolutely, every level of company size, from startup all the way up to very large, even regulated industries like banks and financial services, are starting to realize that
[00:17:22] Steve: if they're gonna give their employees access to AI for productivity, the logical next step is to also have that AI be connected to some of their internal workplace applications and services.
[00:17:34] Steve: So, we see people connecting this into Salesforce or HubSpot, or CRMs, to do prospecting and lead enhancement and content generation, like, you know, writing emails for you, responding to emails.
[00:17:48] Steve: Customer service is big. There's a company called Intercom who has an agent named Fin. Fin implemented MCP so [00:18:00] that customer service teams can attach services and tools to Fin, and Fin can do more proactive data lookups or actions against the customer support queries
[00:18:12] Steve: it's handling. So I think it's definitely escaped the kind of developer chambers and is now in the real world. And to go back to your question on trust, this poses a huge problem. Because you can't really trust any off-the-shelf MCP server you find, and this is coming from somebody who runs an MCP service, who would of course love everybody to trust us, but it's just that
[00:18:39] Steve: you can't trust anything, because you don't know what exactly is happening behind the scenes. What's actually happening when you call that tool with whatever data the model has decided to extract from your prompt and send off to that MCP server? So there is a substantial challenge still to figure out with regard to trust, but I think that the first step is actually just using the MCP servers that are from the providers themselves.
[00:19:07] Steve: Just like you would trust any normal HTTP API that you use to interface with, you know, a service like Spotify or PayPal or Google. They should, and hopefully will at some point, provide a, you know, first party MCP server, because that does alleviate some of the, you know, trust concerns.
[00:19:30] Simon: Yeah, absolutely.
[00:19:31] Simon: I think it's when there isn't a provider's own MCP server that it starts, you know, bringing questions up. I know we had a good session with someone off air, actually, where they were talking about how they've built up their own
[00:19:50] Simon: assistant, almost, you know, where it consumes all of their calendar information, all of their email information, all of their Notion and Linear data. And they have stacks of information that can kind of, you know, give the LLM information about who they are, what they like doing, what are the things that they care about in their career that they want to absolutely nail.
[00:20:16] Simon: And then it uses these MCP servers to pull all that data. And of course there are so few MCP servers that are from the originating company, the actual provider company, that you're relying on these third party open source tools to handle your most sensitive data. And it can cause huge productivity gains if you get this right.
[00:20:39] Simon: But you are opening yourself up a little bit. And I think I'd love to talk about the difference between almost managed versus self-hosted MCP servers here. If I'm self-hosting myself, I'm running this, maybe even locally, for example. The risk is much, much lower, but I have to trust the [00:21:00] MCP server is doing the right things with the data.
[00:21:03] Simon: I'm passing it, and handling that correctly. But managed MCP servers, there are a bunch of other valuable things they will offer us as well, that our laptops and local machines just can't do at scale. Talk us through, when a developer thinks, okay, should I go remote, or should I host this locally, or do it on my machine
[00:21:29] Simon: in a stdio kind of spin-up of an MCP server: what is the right choice?
[00:21:40] Steve: I am more and more convinced that stdio MCP servers are never the right choice. Now, why is that? Well, unless you're writing it yourself, the risk is substantial, and the risks are now at the level of your machine, with the same privileges that you [00:22:00] as the user of your machine have.
[00:22:01] Steve: And so if that MCP server is just running on your laptop, it has the ability to read your environment variables, it has the ability to call the network in any fashion it likes using any protocol and any tools it likes.
[00:22:16] Steve: It has the ability to read and write to your file system, and this is kind of the
[00:22:22] Steve: red-alert, highest-severity problem that a user of an MCP server will face. So the movement of these MCP servers up to managed endpoints, remotely or in the cloud, removes a whole layer of security risk that is down at the endpoint, down at the client computer, the user's machine. However, there are definitely times where you kind of need to have a stdio MCP server, if you're largely working with developer tools and they need to call a CLI that's on your machine.
[00:22:55] Steve: Or if your code project is only local to your machine, and an MCP server is operating on diagnostics from your IDE or something like that, you can't get that up in the cloud, unless you're using some kind of managed cloud code generation platform. I think you guys probably know a couple of good ones there.
[00:23:15] Steve: So I think largely, you know, the remote MCP servers are the more secure ones. But you also need to trust that the endpoint is authentic, and you need to be able to connect to it and authenticate to it. So I don't think there's a hard and fast rule as to which is the better transport to use for an MCP server, whether it be stdio or remote. I think it's just very situationally relevant: okay, well, what kind of data am I using? Where does the data live? What does the authentication look like, and what is the security risk? You know, the reason all these transports exist is because they each have their own time and place.
[00:23:57] Steve: But I think we're moving away from a local stdio-based MCP server that executes on the user's machine, just because there are more implementations of MCP servers remotely, and I think those are far easier to trust. Now, running an MCP server remotely in your own infrastructure versus just connecting to a remote URL that is provided over the internet are also two different things. You can implement an MCP server that wraps an API, because the API provider themselves don't provide an MCP server, and absolutely run that yourself: host it in your own network and connect to it through your infrastructure.
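The stdio-versus-remote trade-off shows up directly in client configuration. The sketch below mirrors the common shape of MCP client configs; the exact keys vary by client, and the package name and URL here are hypothetical.

```python
# Two ways a client can be pointed at the same logical MCP server.
servers = {
    # stdio: the client spawns the process locally, so the server code runs
    # with your user's privileges -- env vars, file system, open network.
    "github-local": {
        "command": "npx",
        "args": ["-y", "@example/github-mcp-server"],  # hypothetical package
    },
    # remote: the client connects over HTTP; the server code runs on
    # someone else's infrastructure and never touches your machine directly.
    "github-remote": {
        "url": "https://mcp.example.com/github",  # hypothetical endpoint
    },
}
```

Which entry you'd choose follows Steve's rule of thumb: stdio when the server genuinely needs your local CLI or IDE state, remote (or self-hosted remote) otherwise.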
[00:24:43] Steve: We see many, many companies deciding to move in that direction just for the control and the transparency over what tools are being connected to by MCP clients that their teams are using. And to plug a product. We do actually have a self-hosted MCP platform that provides this level of transparency, auditability, and security called Turbo MCP.
[00:25:08] Steve: And this is largely a learning from providing MCP Run, which is that larger organizations and folks who are more concerned about security aren't going to trust a public proxy to take this MCP traffic from an MCP client and transmit whatever sensitive information from a database or a SaaS product through a public proxy up to its upstream service.
[00:25:35] Steve: And so in order to kind of satisfy that need of controlling that proxy and owning that traffic, we did build a self-hosted version of MCP Run. But it's a little bit more for a larger, more security-oriented enterprise customer. For individuals, you know, using public proxies is still fine. But you're definitely taking on a little bit of risk just to trust that entity: that you're not going to send off my Notion page data or my calendar data to some advertising service, or, if you get hacked, then, well, you have all my auth tokens and all of the plain text data that was extracted from my email that's maybe sitting inside your database somewhere.
[00:26:21] Simon: So mcp.run is fully managed then by yourselves and Turbo MCP is self-hosted that I can just come along and stand up an instance in my organization and then start running straight away against that. Correct. Right. And actually there are two, from what I saw and talking about my Java shirt, this was a blast from the past.
[00:26:42] Simon: There were two types of things that you can host there. One is MCP servers, the other is servlets. And when I saw that, I thought, wow, I haven't heard or read the word servlet in many, many years. And I thought, wow, have we come that far that actually we can reuse the term servlet,
[00:27:02] Simon: and there's no contention still? And I shed a tear for that moment. But tell us, what's the difference between a servlet and a full MCP server?
[00:27:12] Steve: Yeah, a servlet has literally nothing to do with the spec or MCP at all. It's a term that we chose very, very early on, and if we could rewind the clock, maybe we wouldn't have used this term, but it's just too perfect of a term to not have used.
[00:27:28] Simon: Yeah.
[00:27:28] Steve: In the Java world, a servlet is basically an implementation of a subset of HTTP, so that you can reprogram an HTTP server that runs in Java with what they call the servlet. That allows you to basically intercept an HTTP request, run some code, and then return an HTTP response back up to the host application.
[00:27:51] Steve: And what an mcp.run servlet is, is a subset of MCP [00:28:00] that allows you to just handle, intercept, a tool call, implement the tool call, and respond with a tool call result. And so it allowed us to basically simplify the implementation of MCP, so that a developer only had to worry about implementing the tool call functions that they wanted to support.
[00:28:19] Steve: As opposed to standing up an entire MCP server, in which you need to pick a transport, you need to figure out an authentication scheme, you need to figure out many, many things that are just not the core business of implementing a tool. So yeah, servlet is kind of our toy term for the simplification of MCP on mcp.run, and the ability to kind of go serverless: have no infrastructure, just ship a little function that implements the tool, just like a Java servlet allows you to ship a little bit of code inside the, you know, wrapper of an HTTP service.
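As a sketch of that simplification, a servlet boils down to two functions: one that describes the tools, and one that handles a tool call. (The function names, signatures, and result shape here are illustrative, not mcp.run's actual API.)

```python
# A hypothetical servlet: no transport, no auth, no lifecycle -- the host
# platform handles all of that, and the developer fills in two functions.

def describe() -> list[dict]:
    """Advertise the tools this servlet implements."""
    return [{
        "name": "get_email",  # hypothetical tool
        "description": "Search the user's mailbox.",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }]

def call(tool_name: str, arguments: dict) -> dict:
    """Handle one tool call and return a tool-call result."""
    if tool_name == "get_email":
        # A real servlet would hit the upstream API here; stubbed out.
        text = f"results for {arguments['query']}"
        return {"content": [{"type": "text", "text": text}]}
    raise ValueError(f"unknown tool: {tool_name}")

result = call("get_email", {"query": "from:alice"})
```

The appeal is the same as the Java analogy: the host owns the server machinery, and the developer ships only the tool logic.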
[00:28:56] Simon: Love it. And when are you gonna release applets as well?
[00:29:05] Simon: Yeah. Talking of security problems. No, that makes sense. And so, as a developer, if I wanted to build on top of mcp.run, or rather, I guess, if I wanted to submit and deploy a tool on mcp.run, what does my development lifecycle look like in terms of how I do that?
[00:29:28] Steve: So, yeah, mcp.run is actually powered by another service that our company builds called XTP, which is largely a plugin system as a service. So if you wanted to add a plugin system to your application, XTP makes it very easy to integrate other third party code that's from your customers or users and execute that code inside of your application.
[00:29:52] Steve: And MCP Run, you know, builds on top of this code execution technology. And so you, as a developer who wants to publish to mcp.run, are basically interfacing with the service behind the scenes, XTP. And so there's some tool reuse. You download a CLI, and this CLI allows you to bootstrap an MCP servlet project.
[00:30:18] Steve: So you pick a language you want to implement it in, whether that be JavaScript, Zig, Rust, Go, Python, C, C++, and we drop you into kind of a boilerplate project. And that project basically has two functions. One function to be implemented is the tool call handler, and the other is the description of the tools.
[00:30:39] Steve: And as the developer, you fill in the implementations. Here's a tool called, you know, get email, and it actually then goes and connects to the Gmail API or whatever. And does a search based on input parameters that you defined in the description and then returns back a tool call result. The goal is, you know, we drop everybody in these kind of very well typed boilerplate functions, so that you have, you know, rich types that help auto complete and, you know, make the developer experience rather nice.
[00:31:12] Steve: There is a caveat here that all of these implementations compile to web assembly modules, and so there are limitations that come inherently with that. But the limitations are purposeful so that you don't run into these kinds of security problems that any other form of implementing an MCP server would have.
[00:31:33] Steve: In the sense that if I'm running an untrusted MCP server and I authenticate to my Gmail through it, that MCP server has my token, my access credential to that upstream service. But it also has the ability to send my data to some other server, because once the data enters the MCP server, it's somebody else's code working inside of that, and they could choose [00:32:00] to just write your
[00:32:01] Steve: email list to disk, or they could send it off to some third party server, and you wouldn't know better. So one of the features of WebAssembly and our particular implementation is that we require, when an MCP server is registered by the developer who wants to publish it, that they include a list of domains that the code
[00:32:21] Steve: could potentially reach out to, allow-list a set of environment variables that the code will need at runtime, and allow-list, you know, if they need file system access, which paths or directories that code wants access to ahead of time, to be approved. And then when the user of that MCP server installs it on MCP Run and allows that, those domains, environment variables and file paths are
[00:32:51] Steve: made available to the code. So you're very aware, as the end user, that this Gmail MCP server is only ever talking to [00:33:00] api.google.com. Yeah. Never to any nefarious third party server that the developer may have access to or is leaking information to. So you're sure that your data is staying where you expect it to stay.
[00:33:11] Steve: Yeah. Which is at least a helpful starting point to reduce some of the risk here.
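The allow-list model can be sketched like this: the publisher declares up front which hosts, environment variables, and paths the module may touch, and the runtime denies everything else. (The field names here are hypothetical; mcp.run's real manifest format may differ.)

```python
# A hypothetical publisher-declared manifest for a Gmail MCP server.
manifest = {
    "allowed_hosts": ["api.google.com"],   # the only network destination
    "allowed_env": ["GMAIL_TOKEN"],        # the only env var exposed
    "allowed_paths": [],                   # no file system access at all
}

def check_outbound(host: str, manifest: dict) -> bool:
    """Deny-by-default network policy, as the host runtime would enforce it:
    a request is allowed only if its destination was declared up front."""
    return host in manifest["allowed_hosts"]

ok = check_outbound("api.google.com", manifest)
blocked = check_outbound("exfil.example.com", manifest)
```

The key property is that the user approves this manifest at install time, so the server's code can't quietly ship data anywhere the manifest doesn't name.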
[00:33:18] Simon: Absolutely. Absolutely. And I think it follows a lot of the best practices there, very similar to apps on a marketplace declaring what they have access to, those types of things. Absolutely. And we've talked a lot about the security, I guess, of how I, as a consumer of an MCP server, wanna make sure that that MCP server
[00:33:37] Simon: is, you know, legitimate, doing the right things, those types of things. If we consider now, okay, that's an absolutely legitimate MCP server that's running somewhere, somewhere hosted: how do I make sure that the person who's actually gonna be using it is actually
[00:33:58] Simon: authorized [00:34:00] to be able to access the data they say they can? What's the security model behind that, the handling of authorization, authentication, those types of things?
[00:34:11] Steve: Yeah, absolutely. This was, this was kind of one of the larger problems the protocol faced in the earliest days, just because there really was no standard or recommendation on how to do this.
[00:34:24] Steve: So everybody was kind of implementing, you know, hacks, in many ways, to get authorization or authentication data into the transport, in order to send a token or a key or whatever off to the third party service. Now the protocol has adopted OAuth 2.1, and in particular a subset of OAuth called Dynamic Client Registration,
[00:34:54] Steve: so that you can integrate your MCP server against, you know, otherwise [00:35:00] already standardized implementations of auth flows. And this makes it much easier for a user of the MCP server to effectively delegate access to an AI agent or application, to use their upstream service like Slack or Salesforce or whatever, through
[00:35:23] Steve: a very simple way to authenticate. It's kind of a one-click: I just get to click the sign-in button, and the authentication flow handles the rest. I'm not copying and pasting API keys from some dashboard into a JSON configuration file on my machine or in some client. It just smooths out that use case for the non-technical user.
[00:35:48] Steve: It's not perfect. It's still a challenge to implement dynamic client registration, and there are a bunch of services that help make that easier. But one of the things that we've done with Turbo MCP is decidedly implement dynamic client registration for the developers, so that it doesn't matter how their MCP server implements auth: the client can still authenticate the user against their single sign-on IdP, or Okta, or Keycloak, or whatever.
[00:36:21] Steve: And Dynamic client registration only has to be implemented by us. And then the MCP server can connect to whatever upstream using other authentication schemes. And it reduces the amount of complexity and sprawl for the developer who's trying to stand up an MCP server and deploy it into their cloud, to not have to implement, you know, the entire, you know, end to end OAuth flow.
[00:36:50] Steve: Instead, they're able to just get a token from our authorization server and then get off the races.
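To make the Dynamic Client Registration step concrete, here is a minimal sketch of the RFC 7591 registration request body an MCP client might construct. The field values, client name, and callback URL are illustrative, not mcp.run's or Turbo MCP's actual implementation; a real client POSTs this payload to the authorization server's registration endpoint and gets back a `client_id` to use in a standard authorization-code flow.

```python
import json

def build_registration_request(client_name, redirect_uris):
    """Build an RFC 7591 Dynamic Client Registration request body."""
    return {
        "client_name": client_name,
        "redirect_uris": redirect_uris,
        "grant_types": ["authorization_code", "refresh_token"],
        "response_types": ["code"],
        "token_endpoint_auth_method": "none",  # public client, e.g. a desktop app
    }

body = build_registration_request(
    "example-mcp-client", ["http://localhost:6274/callback"]
)
payload = json.dumps(body)  # what actually goes over the wire
```

Because registration happens programmatically, the user never touches a developer dashboard; the one-click sign-in Steve describes sits on top of this exchange.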
[00:36:54] Simon: Nice, nice. That allows them to make that transition from development to production much, much easier. Are there any other kinds of advice you'd give for developers who are building MCP clients and want to take it to that next step of, okay, how do I actually push this so that people will use it, or host it for people?
[00:37:15] Simon: What would you say are kind of the biggest gotchas in making that transition from, I want to make this work, to, this needs to be a production-grade MCP service?
[00:37:26] Steve: Yeah, I mean, I think the spectrum of production is very wide, in my opinion. You've got, you know, open source, where you want to support end users who can just freely install and use an MCP server, and that still needs to be production, right?
[00:37:43] Steve: Because, you know, they could be using it in a bunch of different contexts. Um, and so, you know, just making sure you're transparent with what the implementation is doing, how it's communicating to third parties, what data is, you know, requested of the [00:38:00] upstream service to provide back down to the client, these are a little bit more for trust and transparency.
[00:38:05] Steve: For an enterprise setting, I think you always have to be considerate of the user being non-technical. If you're asking a person who works in the marketing department to generate an API key and, you know, understand scopes, it might be a little bit too much. So thinking about how a user in an enterprise setting leverages an easy path through an IdP or single sign-on, and how you actually implement that and integrate into their workflow, is really important.
[00:38:43] Steve: Otherwise, you know, the friction is still a little bit too high for a number of users.
[00:38:49] Simon: No, absolutely. Let's talk about a couple of things that kind of build on top of MCP servers. One of the things that I really liked, and I saw the demo on mcp.run, [00:39:00] was tasks: the ability to
[00:39:04] Simon: set out a task that actually goes, you know, runs in the background and performs an action, typically through a set of MCP servers that it has access to. I love the fact that there's webhook support as well. What do we see as kind of like, you know, the next steps of people building on top of these types of things?
[00:39:25] Simon: I'd love to kind of hear your take on tasks, how people are using that today, and potentially even touch on whether we can go beyond text. Right now we are very heavily using MCP servers for text, but can we go beyond text for these types of interactions? What are your thoughts on, I guess, the evolving way in which we use MCP?
[00:39:46] Steve: Yeah, totally. When we first started MCP Run, you know, largely people were thinking about, okay, how do I just use this one MCP server to automate a, you know, action inside of this software? Mm-hmm. [00:40:00] I really want to just do a quick search. Even at the time, there was no web searching inside these AI chat apps and clients, so an MCP server to enable the model to go out and search the web was kind of a big deal.
[00:40:12] Steve: Then it was clear that, like, the multi-turn chat, where the user would supply a large prompt, and the AI would pick apart that prompt and effectively create, like, a to-do list, multiple steps that it would carry out on its own. And each of those steps could individually connect to an MCP server, which was different from step to step.
[00:40:36] Steve: So one step could be to initially read from Notion to try to find some data about a project. And the second step was then to go and cross-reference a term from Notion in a Linear issue to look up some concrete work that needed to be done. And then the next step was to, you know, open up the code project on the laptop and start implementing that feature.
[00:40:58] Steve: And then once the feature was implemented, the next step would be to update the ticket in Linear to move it from in progress to done. All in one prompt, the model was able to discern which tools to call at which time across its available MCP servers. And that was a real light bulb moment: these are agents.
[00:41:19] Steve: And all it was was a prompt with tools attached. It's not a standalone, you know, artifact. It's not a new code base; it's just a prompt with tools. So what if we could create an environment, a runtime, that would host a prompt and allow you to attach tools to it, and then trigger it either through schedules, or through manual action, or through a webhook?
[00:41:45] Steve: That would basically say, okay, you have an endpoint that allows you to run a prompt and have it act, get access to tools. And that was where tasks were born. It's basically not a directed graph, like a workflow builder; it's just a prompt and tools. So, yeah, it kind of competes with, you know, Zapier or n8n from a workflow automation perspective.
[00:42:11] Steve: But it is a dramatic simplification of how to create a workflow, which is just write a nice prompt and give it tools and watch kind of the magic happen behind the scenes.
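The "a prompt with tools" idea Steve describes can be sketched in a few lines of Python. The `Task` and `run_task` names are hypothetical, not the mcp.run API, and in a real runtime the step plan would come from the model interpreting the prompt rather than being supplied by hand:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    """A task is nothing more than a prompt plus the tools attached to it."""
    prompt: str
    tools: dict = field(default_factory=dict)

def run_task(task: Task, plan: list) -> list:
    """Execute a plan: a list of (tool_name, kwargs) steps.
    In a real runtime the LLM produces this plan from task.prompt."""
    results = []
    for name, kwargs in plan:
        results.append(task.tools[name](**kwargs))
    return results

task = Task(
    prompt="Find the ticket for 'dark mode' and mark it done.",
    tools={
        "search_issues": lambda query: f"ISSUE-42 matches '{query}'",
        "update_issue": lambda issue_id, status: f"{issue_id} -> {status}",
    },
)
out = run_task(task, [
    ("search_issues", {"query": "dark mode"}),
    ("update_issue", {"issue_id": "ISSUE-42", "status": "done"}),
])
```

Triggers (schedules, manual runs, webhooks) then reduce to different ways of invoking the same prompt-plus-tools bundle.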
[00:42:23] Simon: Yeah.
[00:42:23] Steve: Yeah.
[00:42:24] Simon: And I recommend having viewers check it out, because it's such a slick process. And like you say, with something like Zapier there's an amount you have to do, obviously, to plug things together and manually wire things together.
[00:42:40] Simon: The task that you create here is very free-flowing. And, you know, there's a lot of decision making that kind of happens in the background, which is super cool to see.
[00:42:52] Steve: Absolutely. Well, thank you.
[00:42:52] Steve: Yeah. Many other options are still very much in this kind of arrows and boxes era.
[00:42:59] Steve: Right. [00:43:00] You know? Like, create a component, drag an arrow to the next thing; if this happens, go to the next box, to the next arrow. And the models are so good now that these decision-making steps, or you figuring out how to handle different scenarios based on the return values of a tool call, are just kind of inherently part of the model's capability and knowledge.
[00:43:23] Steve: And you don't even need huge foundation models to get this kind of multi-tool-call workflow. We're even seeing small language models work really, really well, knowing how to use tools and sequence them in the right order to actually conduct a workflow accurately. Your question about different modalities, and text kind of being the current default: I think there's a lot of room to improve this.
[00:43:48] Steve: And largely it comes down to MCP client support, and the inconsistency across lots of different MCP clients in how they present [00:44:00] other forms of media besides text. The protocol requires that multimedia is encoded into base64 strings. So right outta the gates, you know, you have an additional step for a client to encode multimedia and then ship it up to a server for it to then be decoded and used in some way.
[00:44:21] Steve: There are ways to get around that, but largely you still have to encode the image or video or GIF to some text to be sent over the transport. But I think there are some really interesting use cases that could be unlocked if more clients were to implement support for multimedia handling, both on the submission side, from the client to the server, and also on the rendering side, to take an image back from an MCP server and show it off in the client.
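The base64 round trip Steve mentions looks roughly like this. The content-block shape follows the MCP spec's image content type; attaching it to an actual tool result and the sample bytes are left out for brevity:

```python
import base64

def image_content_block(raw_bytes: bytes, mime_type: str = "image/png") -> dict:
    """Wrap raw image bytes as an MCP-style image content item.
    The protocol carries binary media as base64 text, so every hop
    pays the encode/decode step discussed above."""
    return {
        "type": "image",
        "data": base64.b64encode(raw_bytes).decode("ascii"),
        "mimeType": mime_type,
    }

block = image_content_block(b"\x89PNG...", "image/png")
decoded = base64.b64decode(block["data"])  # round-trips losslessly
```

The ~33% size overhead of base64, plus the encode/decode work on both ends, is the practical tax on multimedia that richer client support would need to absorb.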
[00:44:57] Steve: There's another project that's unrelated to multimedia necessarily, but it's called MCP UI, which allows a client to interpret signals from the MCP server and render some novel interface inside of the chat app, or whatever the client is hosted in, so the user can have more interesting interactivity besides chat.
[00:45:19] Steve: So instead of just sending a message and getting a message back, I might be able to click a button and get back a list of products. And the list of products is rendered just like it would be on an e-commerce site, with descriptions and prices and purchase buttons. And, you know, an MCP server could implement a full application, entirely via MCP UI, inside of these various different clients.
[00:45:43] Steve: So I think there's, again, lots of room for improvement and innovation with richer media types, but also, as we explore how to render UI dynamically inside these clients, you'll see really cool use cases emerge that we just kind of can't even imagine right now.
[00:46:03] Simon: Yeah. Very interesting. I guess that kind of leads me onto, let's say, the final question. What are we talking about, 11 months now since MCP was released, by the time this episode goes out? Wow. Not even a year. And the amount of innovation and creation that's happened around the MCP protocol, with, you know, the servers that have been built and the work that's gone on around it, has been amazing.
[00:46:34] Simon: The pickup of that has been incredible. What on earth can we expect in another year? Where do you see the growth of this happening in the AI space? And I guess, you know, will it still be such a big thing in a year's time? Or will the next big thing
[00:46:52] Simon: take all that attention away from MCP?
[00:46:56] Steve: Yeah. I'm not even trying to predict what the [00:47:00] world outside of MCP looks like. I think there's still, you know, so much adoption to be made, that we've only kind of seen the very tip of the iceberg when it comes to how MCP impacts, you know, the world. I think that
[00:47:17] Steve: a missing piece is to actually have more services and data sources and things that don't necessarily need authentication, like browsing the web. You don't browse the web with identity all the time. You're going to read articles or, you know, get updates about some topic, or read someone's blog that you're not signed into a service for, necessarily.
[00:47:39] Steve: And MCP clients are currently over-configured. You have to be very, very explicit with which MCP server you are using, and how it gets, you know, instantiated, and what the tools available in it are, and I want to approve this and that, and have lots of control and transparency over it. And that's good.
[00:47:57] Steve: But I think there are just lots of really interesting autonomous capabilities and interactions that we will see once people decide: I'm actually gonna implement my little app, or my blog, or my site as an MCP server first. And then MCP clients can start to elect to connect via MCP, without this kind of preemptive motion of the user adding an MCP server to their client.
[00:48:26] Steve: Instead, just like you browse the web today, you render a webpage and you click a link.
[00:48:31] Steve: That link is really, like, what could be an MCP server in a more freeform MCP client. So now, as you're navigating the content that comes back from MCP servers, there could perhaps be an embedded MCP server URL inside that result, and the model could say, hey, client,
[00:48:48] Steve: connect to that server without the user knowing and automatically start calling the tools and getting more context. And, you know, navigate through a new web that is based [00:49:00] upon MCP entirely.
[00:49:01] Simon: And you've essentially got your own curated web in your new browser, which would be your new gateway
[00:49:10] Simon: into the internet. Yeah. What would the browser for MCP look like? What else needs to be done on
[00:49:16] Simon: the client side to, you know, create that experience? Well, one thing's for sure: I agree with you. I've given up predicting now. It's like, what's the point? What's the point of predicting?
[00:49:27] Simon: Right. But it's so curious to think about what could be, and I'd love to invite you back in a year's time and we'll see how close we were.
[00:49:40] Steve: Let's do it. In fact, lots has probably already changed just in the time we've been recording the podcast.
[00:49:46] Steve: I gotta go, I gotta figure out what we missed in just the hour or so.
[00:49:47] Simon: I took a 10-day vacation only about a month and a half ago, and, oh gosh, never again. It's just too hard to catch up. It's crazy. Yeah. Amazing. Steve, this has been super insightful.
[00:50:00] Simon: Really, really appreciate your, you know, your expertise in and around this space. I'd very much recommend folks go and have a look at mcp.run and Turbo MCP. Super, super interesting technologies. Where can people learn more about that? I presume they just go straight to mcp.run?
[00:50:20] Steve: You can go to mcp.run or dylibso.ai and get more information about both of these products. Or reach out to me on Twitter at @nilslice, or send us an email, hello@dylibso.ai.
[00:50:35] Simon: Sounds amazing. So absolutely do that. I think it's a super interesting technology, thanks again Steve.
[00:50:39] Simon: Really, really appreciate the time.
[00:50:41] Steve: Absolutely, Simon. Thanks for having me.
[00:50:42] Simon: No worries. I hope you all enjoyed that and thanks for listening and tune in next time. Bye for now.
In this episode
In this episode of AI Native Dev, host Simon Maple and Steve Manuel, founder and CEO of Dylibso, delve into the Model Context Protocol (MCP), touted as the "USB-C for AI." They explore how MCP offers a standardized, model-agnostic interface for connecting AI models to tools, data, and services, enabling developers to build once and run anywhere. Key insights include the architecture's clean separation of responsibilities, the emergence of the Anthropic MCP Registry for better discovery and trust, and Dylibso’s MCP Run providing secure, cost-effective execution for third-party servers.
MCP is emerging as the “USB‑C for AI” — a standardized way to connect models to tools, data, and services. In this episode of AI Native Dev, host Simon Maple talks with Steve Manuel, founder and CEO of Dylibso, about what the Model Context Protocol (MCP) is, how its client–server architecture works, why registries matter for discovery and trust, and how Dylibso’s MCP Run (mcp.run) offers a secure, hosted execution environment for third‑party servers. The conversation blends protocol-level clarity with pragmatic guidance for developers building AI-native applications and agent workflows.
Why MCP is the “USB‑C” for AI tools
Developers have long wrestled with divergent tool/function calling conventions across LLMs and agent frameworks. One model wants parameters described one way, another expects a different signature, and return values vary just as widely. MCP fixes this by providing a common, model-agnostic interface that any client can understand and any server can implement. Once an MCP server wraps a dataset, API, or SaaS, it becomes reusable across clients like Claude Desktop, ChatGPT, or bespoke agents — “write once, run anywhere” for AI tools.
The benefit isn’t just conceptual elegance; it’s practical leverage. Teams can build tools once and reuse them across multiple AI experiences, swap models or hosts without rewriting integrations, and compose richer workflows by mixing and matching MCP servers. This standardization compresses integration time, reduces drift across environments, and enables a new layer of portability for AI-native applications.
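As a rough illustration of what this portability means in practice, an MCP server exposes each tool with a name, a description, and a JSON Schema for its parameters — the same manifest is readable by any compliant client. The tool and field values below are invented for the example, not drawn from any real server:

```python
def get_weather(city: str) -> str:
    """Return a canned forecast for a city (stand-in for a real API call)."""
    return f"Forecast for {city}: sunny"

# The manifest any MCP client sees when it lists this server's tools.
# Key names follow the MCP tool schema; the tool itself is hypothetical.
TOOL_MANIFEST = {
    "name": "get_weather",
    "description": get_weather.__doc__,
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```

Because the schema, not the host, defines the contract, the same tool works unchanged whether it is called from Claude Desktop, ChatGPT, or a bespoke agent.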
Steve notes that, from the moment MCP was announced (November 2024), it looked like the plugin system for all AI software — especially if you accept the premise that all software is becoming AI software. Dylibso adopted the protocol early, layering it onto their isolation stack to bridge secure plugin execution with the new MCP ecosystem.
Inside the MCP architecture: host, client, and server
MCP follows a familiar client–server pattern, but with a twist designed for LLM workflows. The “server” side encapsulates access to upstream systems: databases, APIs, SaaS, or custom logic. It’s the adapter that performs actions and fetches context for the model. The “client” sits adjacent to the model inside the host (e.g., Claude Desktop, ChatGPT, or your own agent runtime). The client knows how to discover and connect to servers, list available tools, call those tools with structured parameters, and manage resources.
In practice, your host environment embeds an MCP client that negotiates protocol operations such as tools/list, tools/call, and resource handling. The server implements the same spec from the other side, exposing a consistent view of its capabilities to any compatible client. That separation allows you to put servers wherever they make sense — local, remote, or hosted — while keeping the model-side logic thin and portable.
For developers, this architecture yields clean responsibilities. If you own a data source or service, you publish an MCP server as the canonical access point. If you’re building an agent or AI feature, you configure your host/client to discover and call the right servers. The result: tooling that’s predictable to integrate, safer to reason about, and simpler to reuse across applications and models.
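On the wire, these operations boil down to JSON-RPC 2.0 messages. A hedged sketch of the two core requests (method names per the MCP spec; the tool name and arguments are invented) might look like:

```python
from itertools import count

_ids = count(1)  # JSON-RPC request ids must be unique per connection

def rpc(method: str, params: dict = None) -> dict:
    """Build a JSON-RPC 2.0 request as carried over an MCP transport."""
    msg = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Ask the server what tools it offers, then invoke one of them.
list_req = rpc("tools/list")
call_req = rpc("tools/call", {"name": "search_docs",
                              "arguments": {"query": "auth"}})
```

The same two messages work against any server, which is exactly why the client side stays thin: discovery and invocation are uniform regardless of what the server wraps.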
Discovery and the new Anthropic MCP Registry
Early MCP adopters had to spelunk GitHub to find useful servers, often wiring up local CLIs or executables as ad hoc tools. Discovery and trust were real pain points. That’s changing with the new official Anthropic MCP Registry and the surrounding community effort: a universal index that points to MCP servers hosted across registries, GitHub, and first-party providers.
The registry’s design supports sub-registries — vertical catalogs for domains like marketing, developer tools, or financial services — all federated to a common backbone. Beyond making servers easier to find, the registry aims to improve identity and provenance so developers can verify who’s publishing a server and assess basic trust signals. While quality and security vetting are ongoing challenges, centralizing listings and identity is a major step forward.
Practically, this means developers can discover higher-quality servers faster, evaluate them in a consistent format, and incorporate them into hosts with fewer surprises. If you’re publishing a server, align with the registry’s guidelines: document tool signatures and parameters, clarify authentication, version your API, and establish clear ownership. As first-party providers like PayPal and GitHub publish official MCP endpoints, the registry will increasingly act like DNS for the AI tool layer — a findable, trustworthy directory of capabilities.
MCP Run (mcp.run): hosted, secure execution for third‑party servers
Dylibso’s MCP Run (mcp.run) moves beyond discovery to safe execution. It hosts user code that implements MCP servers in a highly isolated WebAssembly (Wasm) environment. Instead of spinning up heavy containers or VMs, mcp.run executes small, well-contained functions, delivering strong isolation with favorable performance and cost characteristics. For developers evaluating third-party tools (or publishing their own), this reduces the “blast radius” if something goes wrong.
This is valuable because the reality of third-party code is messy: not every server will be first-party, audited, or airtight. By sandboxing execution with Wasm, mcp.run mitigates risks that would otherwise demand full code review or dedicated infra. Developers can spin up servers quickly, run tasks against them, and iterate on workflows without committing to long-term hosting or security overhead from day one.
MCP Run also aids experimentation. You can combine servers from the registry, prototype multi-tool agent flows, and harden your setup as you graduate to production. When a server becomes core to your stack, you can migrate it to your own infrastructure or adopt a first-party hosted endpoint. MCP’s standardized interface ensures that move is low-friction: the client/host doesn’t change, only the server’s location and identity.
Security, trust, and enterprise adoption
Security is the number-one concern Simon and Steve highlight. The parallels to open-source supply chain risk are real: unvetted code, dependency vulnerabilities, and malicious packages. The MCP ecosystem is actively addressing this with identity in the registry, stronger sandboxing via Wasm isolation, and the growing trend of first-party MCP endpoints published by the services themselves. Over time, the “trusted-by-default” path will look a lot like HTTP today: use official endpoints for core integrations; reserve third-party servers for specialized needs with appropriate sandboxing and guardrails.
Enterprises are already piloting and adopting MCP across sizes — from startups to large organizations. The early 2025 wave of client support made MCP visible to non-technical users, and that visibility is accelerating demand. Teams are layering MCP into agent frameworks, IDE assistants, and internal AI copilots, typically starting with read-only tools and narrow scopes before expanding to write actions with tighter policies and audit.
Actionably, developers should adopt a security-first posture:
- Treat servers like any external integration: scope tokens, rotate credentials, and use least privilege.
- Prefer first-party MCP endpoints where available; otherwise, sandbox third-party code (e.g., via Wasm) and monitor aggressively.
- Maintain an allowlist of approved servers, pin versions, and capture audit logs of tool calls and resource access.
- Validate inputs/outputs at the client/host boundary to prevent prompt injection or unsafe tool invocation.
- Run staged rollouts with feature flags; measure tool reliability and latency before scaling.
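A minimal sketch of the allowlist-plus-audit-log pattern from the list above; `guarded_call`, the server names, and the policy set are illustrative, not a real MCP SDK API:

```python
import time

# Allowlist of approved servers, pinned by organizational policy.
APPROVED_SERVERS = {"github-official", "internal-wiki"}

audit_log = []  # in production this would be a durable, append-only store

def guarded_call(server: str, tool: str, args: dict, call_fn):
    """Enforce the allowlist and record every tool call before
    delegating to the real transport (call_fn)."""
    if server not in APPROVED_SERVERS:
        raise PermissionError(f"server '{server}' is not on the allowlist")
    audit_log.append(
        {"ts": time.time(), "server": server, "tool": tool, "args": args}
    )
    return call_fn(tool, args)

result = guarded_call(
    "internal-wiki", "search", {"q": "runbook"},
    lambda tool, args: f"{tool}:{args['q']}",  # stand-in for a real MCP call
)
```

Wrapping calls at the client/host boundary like this also gives you one place to add input/output validation and staged-rollout flags later.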
Key Takeaways
- MCP standardizes tool and resource access for AI, delivering “write once, run anywhere” portability across hosts, models, and agents.
- The architecture cleanly separates responsibilities: servers wrap upstream systems; clients live next to the model in the host, handling discovery, tool listing, and calls.
- The Anthropic MCP Registry improves discovery and identity, enabling sub-registries and a more trustworthy ecosystem for server listings.
- Dylibso’s MCP Run (mcp.run) provides hosted, Wasm-based isolation for third-party servers, making experimentation safer and cheaper without container/VM overhead.
- Security best practices are essential: prefer first-party endpoints, sandbox third-party code, scope secrets, and log/audit tool calls.
- Enterprise adoption is underway; start with read-only tools, validate performance and reliability, and then expand to write actions with careful policies.
- For developers, the playbook is clear: discover servers via the registry, prototype in a hosted sandbox like mcp.run, and harden as you move to production — all without rewriting tools when you change models or hosts.