
[00:00:40] Simon: Hello and welcome to another episode of the AI Native Dev. And last week we had a little bit of a 101 with Max on AI agents. And today we're gonna continue that practical theme and continue talking about various agents, how you can use agents, but also how we can extend the capabilities of [00:01:00] agents through MCP servers and not just talk about them.
[00:01:02] Simon: But we're actually gonna see and have a play with some of Alan Pope's favorite MCP servers and servers that he's played with in the past, and talk about the tools that provide the capabilities that those MCP servers offer. So yeah, I mentioned Alan, Mr. Alan Pope. Alan, very, very welcome to the AI Native Dev Podcast. How are you?
[00:01:23] Alan: Thanks very much. Yeah, it's great to be here. I've been watching for a while and now I get to be on it, which is awesome.
[00:01:27] Simon: Long time, long time listener. First time caller, right?
Alan: Yes, exactly.
Simon: So, Alan, tell us a little bit about who Alan was before we met, maybe. And then, and then today.
[00:01:40] Alan: I think I get a ribbing from my friends because I say I'm not a developer.
[00:01:45] Alan: I didn't study software engineering. I've been a dev rel and community manager for a long time, and I can write some code, but I've found in the last few years, all of the different developments with LLMs have [00:02:00] accelerated me from the ideas that I have, turning them into actual usable pieces of software and actually sometimes even publishing them on the internet for people to, you know, critique, review and use and contribute to.
[00:02:16] Alan: And so I think it's been a real accelerant just for me personally using LLMs because I would often find it difficult to get started, not know what tools to use, not know what frameworks to use, not understand everything that I was doing. And this has just made life a little bit happier for me because I can finish things.
[00:02:38] Simon: Yeah. Amazing. And when we first met, we grabbed a coffee in a local Starbucks, and we just kind of geeked out on all things specifications. And I guess, what was the drive that made you lean more into specifications? What's [00:03:00] the problem that made you think, do you know what, we need to grow this out into a larger spec versus more of a prompt-style approach?
[00:03:08] Alan: Yeah, so I think I'm already, my mind was already in the space of, I want to describe what I want because I felt like if I can describe what I want, at some point I will have the time and capabilities to turn that into code, right? But I, what I need to do first is write it all up in English in a way that I and other people can understand.
[00:03:29] Alan: So I would often run these. I wouldn't call them specs. They were like design documents. And I'd run them past other friends who were software developers and sometimes I'd end up nerd sniping them into doing it for me. And sometimes I'd just get their critique, their review and they'd say, no, that will never work.
[00:03:46] Alan: Or, yes, that's a good idea. Someone should do that. And it would go filed away in the big list of projects that Alan would like to finish one day. And then when the whole spec based development started, like gaining traction, I felt like this was my time because there are these documents that I've written that I fully understand.
[00:04:10] Alan: But communicating that and conveying that into, you know, a software developer's head is hard. But conveying that to an LLM that understands what I want as far as they can understand and can turn that into how to make that a thing that I can run. And with a spec that is more technical and not so much a design document, but a technical specification of, okay, we're going to use Python, we're going to use UV to do dependencies.
[00:04:39] Alan: We're gonna have a, a particular database for the backend and this is what the API should look like. That's the bit that I was missing. And with the specs I can identify best practices and also, identify how I implement those best practices. And so it's kind of, that's the bit that's missing. And it, there was quite a learning exercise in [00:05:00] discovering all of the, the ways in which you use these tools and agents and, and specs.
[00:05:05] Alan: And I think I'm quite keen on sharing what I do because I feel like there's a gap and people don't often know how to get from chatting to an LLM to creating a finished project. And that's, that's sometimes because they just don't know what they don't know. And so I'm, I'm very keen on sharing that kind of information.
[00:05:25] Simon: Interesting. And a lot of what you're saying is really guidance. It's not necessarily just what the thing you want to build is. It's a lot of guidance in terms of how you want the agent to go about building that as well. It was funny actually, we chatted in the office the other day, and we were talking about tools and tool usage, and one of the things that we talked about is how you go about
[00:05:51] Simon: finding and identifying those tools. And we're gonna talk about a ton of tools here today. We're gonna talk about Claude, Gemini CLI, TypingMind. We're gonna maybe talk a little bit about various MCP servers that do things like sequential thinking and Firecrawl, and maybe the YouTube access, and all those types of things.
[00:06:13] Simon: But Alan, how do you kind of identify, you know, where you wanna grab these tools from? What's the thing that motivates you to try them out and learn about them?
[00:06:29] Alan: So initially it was, I just needed the thing to get going. And so there are a few tools that I've used that got me started, and now I have an understanding of the landscape. I'm trying to find new things that can make me go faster or replace manual processes. So, for example, one of the ones you mentioned is a YouTube transcription downloader.
[00:06:59] Alan: So [00:07:00] go and get the transcription of a video, because there's a lot of words in there. And you could use those words in the creation of other content like social posts and blog posts and that kind of stuff. And as part of DevRel, that's something I would do. Now, I'd already written a little shell script that uses yt-dlp, a popular YouTube downloader tool.
[00:07:21] Alan: But that was manual. I had to run yt-dlp and the URL. Whereas what I was thinking would be better is if there was an MCP that could do that for me while I'm in the flow of creating a blog post, and I'm in the zone, and I don't have to jump out to use some terribly written shell script of mine or use some third-party tool.
[00:07:41] Alan: I can have it all integrated. And so having a conversation with an LLM where it can branch off and go, okay, I'll now I'll go and get those transcriptions and then now I'll do the summary and then let's figure out what keywords people are looking for and do all those things, but outsource them to the [00:08:00] tools that can do those things best.
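The manual step Alan describes, grabbing a video's transcript with yt-dlp, might look something like this Python sketch. It is a rough illustration only: the helper names are invented here, and it assumes you have yt-dlp installed on your PATH (the flags used are from yt-dlp's documented options for fetching subtitles without the video).

```python
import subprocess

def build_transcript_cmd(url: str, out_dir: str = ".") -> list[str]:
    """Build a yt-dlp invocation that fetches only the subtitles."""
    return [
        "yt-dlp",
        "--skip-download",    # we want the transcript, not the video
        "--write-auto-subs",  # fall back to YouTube's auto-generated captions
        "--sub-langs", "en",
        "-P", out_dir,        # where the subtitle file should land
        url,
    ]

def fetch_transcript(url: str, out_dir: str = ".") -> None:
    # Actually runs the tool; requires yt-dlp to be installed.
    subprocess.run(build_transcript_cmd(url, out_dir), check=True)
```

Wrapping this in an MCP server, as Alan suggests, would mean the agent can call `fetch_transcript` itself mid-conversation instead of the author dropping to a terminal.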
[00:08:01] Alan: That's the part that I find interesting. The tricky part is, where do you discover these things? And sometimes it's simply word of mouth. I sit having coffee with a friend and they tell me the things they've discovered over the weekend, and that's not scalable. It's great for me and great for him, but it's not scalable.
[00:08:19] Alan: And I think we do need better ways to share, the tools and settings even like, the, the little configuration settings for each of your agents, those kind of things. Unless you sit there and read through all of the documentation about everything, you know, and ain't nobody got time for that. So I'd way rather have recommendations from other people and rated, tools and MCP service that, that I can make a qualitative decision about whether I'm going to use that or not.
[00:08:50] Alan: Yeah.
[00:08:50] Simon: Yeah. Cool. Let's talk about agents a little bit now and jump into, I guess, which agents you'd recommend. You talked about word of mouth and [00:09:00] recommending these agents. For someone who's never really used, let's say, a coding agent before, we have to first, I guess, recognize there are different styles of agents, and an agent is just something that can go away and do something very autonomously,
[00:09:17] Simon: perform various tasks, maybe even call out to sub-agents and other agents to perform those tasks. It doesn't necessarily say anything about the UI as to how you interact with it. Very often chat-based, sometimes in a UI that is web-oriented, sometimes others. How would you recommend people try using an AI agent first?
[00:09:41] Simon: Is it up to them? Is it the best tool in the market?
[00:09:47] Alan: You know, this is one of those tricky things where it's changing all the time. And I know a lot of people just use ChatGPT, and they just chat to it in a webpage or in the app, and they'll ask it to create something and it will do that.
[00:10:02] Alan: And certainly over time, all of the agents that are available, the popular ones have improved dramatically. It used to be you'd get into an angry argument with an agent trying to get them to do what you want and get them to understand what you want. And eventually you'd fill out their context window and it would forget what you originally asked them for.
[00:10:24] Alan: And things have got a lot better now, where many of the web-based agents have code editors built in, and some of them have got a place where you can run and test the code directly in the webpage. And so my experience from early use of ChatGPT to write code, or early use of Claude in a webpage to write code, is probably very different now, because people can just open a webpage and they've got a code editor right there.
[00:10:51] Alan: So I think my experience might be colored by the fact that I've been doing this for a while, and by a while, I mean like a year or more. That's still not very long. So I think if you were already using a web tool like Claude or ChatGPT, looking back at the chat, I think one of the ones I would look at is TypingMind.
[00:11:17] Alan: Now, TypingMind is interesting because it's the same web-type UI, but it's super configurable in that you can connect it to external tools to do things like download the YouTube thumbnails or the transcription or get it to go and search online for more information or connect directly to GitHub or whatever the things you need to do as part of your development to do research and software development.
[00:11:46] Alan: But the nice thing is it has this web UI, but it is super configurable. And so if you look at the example I've got here, you can create these things called agents. Now these are, you know, when you think of the term agent, it's a bit of an overloaded term, and in here it means like a personality of an agent.
[00:12:06] Alan: So it's prepared with a prompt and it may have access to some knowledge that you've uploaded. So for example, here's a very simple example that I've used, this one here called Write like Pope. Now all I've done is in TypingMind, I've updated a knowledge base, which is over here on the left-hand side, and in the knowledge base, you can upload documents that your agents in TypingMind will have access to.
[00:12:31] Alan: And so down here I've got one called Pope Blogs, and I've had a blog since about 1997 or something. And I grabbed all the source for all of the articles that I've ever written and put them in here so that I can have TypingMind help me write an article that sounds like I wrote it.
[00:12:52] Alan: Now I realize that sounds like I'm just not ever gonna write any blogs again because I'm just gonna leave it up to TypingMind to do it. But I did it as a test. I haven't actually used it an awful lot. I've used it a couple of times, but it's quite remarkable. Once it knows your style of writing and your tone of voice and the way that you write, it's very able to create new content that looks like you wrote it.
[00:13:16] Alan: It's quite surprising. I can give you a little demo. If I go back to the chat and I say, write like Pope, this has a prompt in it that says, write a blog post the way Alan would write. And down here you can give it access to those MCP servers that we'll talk about a little bit more later.
[00:13:40] Alan: But you can also give it access to the knowledge base. And if I say, write a blog about new developments of MCP servers in the command line. So the other thing you'll notice while it's chugging away, you'll see down here it says Claude Sonnet. One of the nice things about TypingMind is you're not locked to one of the providers.
[00:14:19] Alan: I could start a chat and say, actually, I want to do this with Gemini, or I want to do this with ChatGPT. And you can even fork conversations partway through and change from one model to another. So this is quite an advantage over just sitting in the ChatGPT web UI or the Claude web UI because Claude doesn't give you access to ChatGPT and vice versa.
[00:14:42] Alan: And so there are some interesting options in here. Now you'll notice here it's gone and found a blog post that I wrote some time ago called Command Line Only Laptop, which is a ThinkPad that I had a while ago. And then it reads a few more things, some other articles that I've written, and down here.
[00:15:03] Alan: I mean, this is exactly the style of content I would write. I'd put a TL;DR at the top, and then I'd write far too many words. And so it has got that style exactly right. And, you know, subheadings like what on earth is MCP, that kind of stuff. That's the style of writing that I have. So the thing I like about TypingMind is it's doing a lot of things all within one user interface.
[00:15:26] Alan: It's able to connect to different tools. It's able to have knowledge uploaded to it, not just blog posts, like documentation or existing code. And I can access this from anywhere because I've signed up, and I can access this on my mobile phone and have a conversation and actually start a conversation and some software being developed and then walk away and continue somewhere else.
[00:15:51] Alan: One of the other things that it has, which is really cool, is one of the tools that it has access to is called File System. And this is handy because I'm on a webpage, but I can give this tool access to specific directories on my machine and say, write the code directly on my machine. And that gets away from what we used to do, which is have a conversation with an LLM and copy and paste the code out of the conversation.
[00:16:18] Alan: With this, it just goes directly to that folder and will start creating files. And when you go to that folder, you'll see the software we created. And then later on I could use the GitHub MCP to have it actually create a repository on GitHub and upload the whole thing and upload the readme and even make the readme sound like me and in the style of my other readmes.
[00:16:40] Alan: So I find TypingMind a good tool, a good agent for doing a lot of different things in one place without you having to switch between all kinds of different tools and utilities. And if you prefer the web UI, this is great for that.
[00:16:56] Simon: It's amazing because it feels quite generic in the sense of it's not necessarily specifically for coding or specifically for writing blogs, but I guess that's the power of MCP servers to be able to connect you with those specific tasks that you might want the agent to branch out and call an MCP directly for.
[00:17:16] Simon: I'd love to maybe do some testing here, Alan. See if you could write 10 different blogs or so, five of which you wrote manually and five of which you used this for, and we can see maybe if folks who know your style of writing can guess which ones are from you and which ones are from AI. Do you have to do much editing on these blogs?
[00:17:41] Alan: I will always edit because they sometimes still do get a little bit excitable and use phrases that I would never use or use terms or superlatives that I would never use, but it's surprisingly good in terms of is it convincing. I also used this to create webinar abstracts in a past job.
[00:18:06] Alan: And the way I did that was I looked at the potential person we were gonna talk to and I scraped their LinkedIn of every post they've made recently, their social media. I then went to the open source project that they maintain and I scraped the last five meeting notes for their open source project that they talk about.
[00:18:30] Alan: And I put that all in there and I said, give me a webinar synopsis that this person could talk authoritatively on topics that they clearly like, and then gave them the webinar synopsis format that we use in the company. And it came up with an abstract that was pretty much spot on, and I sent it to my boss, and they were like, yep, that's great.
[00:18:54] Alan: That's exactly what we're after. And I sent it to the person I was gonna be discussing this with, and they said, yep, I can talk about all of that. Brilliant. Well, there we go.
[00:19:01] Simon: We totally should have done that for this podcast, Alan. Let's switch. Maybe we did. Yeah. Maybe. Maybe. Who knows? We'll never know.
[00:19:10] Simon: Let's talk a little bit about a more specific tool now, because like we said, this can be used for many, many things, including those synopses. And I guess you could use something like Claude Code for similar things, and it probably won't complain. It'll do what it's told or what it's asked for. But Claude Code is much, much more of a developer's tool, in the sense that if someone wanted to make changes to code or create new projects, Claude Code could be a great starting point. Talk us through terminal AI agents. I guess these really only appeared earlier this year, I suppose, right? First of all, what is a terminal UI, and are you a fan of it? [00:20:00]
[00:20:00] Alan: Good lord, yes.
[00:20:01] Alan: So my, when Claude Code first came out, I was using VS Code and GitHub Copilot, so a familiar graphical IDE that everyone's seen.
[00:20:15] Alan: And down the side you have a chat with an agent copilot, and it helps you update or create projects directly in the IDE. With Claude, it's a little different. But it depends how you use it. Like you can just jump into a terminal and we could create a project in AIND and call it, I don't know, project five.
[00:20:45] Alan: This is my fifth project that I'm doing today, I guess, and just launch Claude and ask it to create something. Like there's no IDE here. I'm not looking at any code. I'm not within some other tool. I am directly talking Claude here. And at this point I can ask it, you know, create a,
[00:21:20] Alan: right now we are all crikey. That's a APLI error. That happens. So luckily it recovered. So I just guessed what to do there. This is not a prepared demo. I literally just said create a thing in a language that I don't know to do a thing that I would like to achieve. Right. I quite like having RSS readers. Now under the covers, obviously this is gonna be creating code in this folder and that will appear at some point. It's going to create files and I don't know, maybe I do or don't have cargo installed, or I do or don't have all the relevant bits, but it's gonna start munching away on the knowledge that it has and whatever it's found via whatever tools it has access to, and it's gonna start creating some code.
[00:22:11] Alan: Whether it works or not, I don't know, but I've not had to give it an awful lot of preparation. I literally, that one line is all I told it, and I haven't done any prep for this. That was just a spur-of-the-moment suggestion, and that's quite freeing. Like not having this entire IDE in front of you, but just having a chat with a thing that may or may not be able to create what you envisage.
[00:22:38] Alan: Right now what I'm envisaging is, you know, a list of feeds, and I click on a feed and I see the most recent articles, and I click on an article and it expands them. I don't know if that's really what this is gonna create; that's what I have in my mind. And I think, dare we say it, that's the whole vibe coding of a few months ago: very much just blurt out one line and say, yep, let's go, do it, do it, do it, and away it goes and creates a thing, or doesn't. And it may work or it may not. I don't feel I want to do that. I want to create things where I can clearly articulate what it is I want. Like, I don't think it's enough to say, create an RSS reader in Rust.
[00:23:28] Simon: And what kind of Claude Code developer are you, or what kind of agent developer are you? Are you the kind of person who short prompts and then iterates to try and say, actually, I would rather this than the way you did it? Or I wanna, you know, tighten up these requirements over here?
[00:23:46] Simon: Or do you actually long prompt to start off with, whereby that first prompt is quite detailed in terms of what the style of the app is, maybe what stack you want it to be run on, and a number of capabilities? Which do you naturally fit into?
[00:24:01] Alan: So I think I've evolved from, I think what a lot of people have done is they've evolved from that short prompt to the longer prompt, simply because it can get quite exhausting when you have an idea of what you want but you haven't articulated it fully.
[00:24:23] Alan: And so the agent tries the best it can with the information it's given. And so what I now tend to do is be very clear, very specific, almost too verbose, because I really want to steer the agent from the start. And what I will often do is set the permissions. So these little prompts that pop up that you're seeing now, where it's asking, should I apt install this thing, or should I modify these files in the current directory?
[00:24:56] Alan: I will often say yes to all of those or configure in my Claude settings to allow it to edit in that folder because I actually want to give it a quite verbose prompt. And then I want it to go away and leave me alone for a while so that when I come back, there’s something useful for me to see. And I don’t come back to the screen and see it’s waiting, asking me a silly question, right?
[00:25:20] Alan: I want it to just put as much effort in first and then ask me. And in fact, I’ve got a setting on my Claude, because I quite like multitasking. I’ll have Claude running maybe while I’m watching a film or maybe while I’m making dinner for the family, and I’ll have an idea and I think, oh, let’s see if Claude could do this.
[00:25:39] Alan: And I’ll tap as much as I possibly can, and I have a Claude setting that will say out loud when it’s waiting for my input. And so it’ll say, Claude is waiting for input. And so if I’ve got the speaker turned up or if I’ve got my headphones in, then it will let me know that it’s ready 'cause it’s dead time.
[00:25:55] Alan: If it’s prompted something and I haven’t noticed, and it was an hour ago, then it’s a whole wasted time. Whereas if I had it tell me it’s waiting, come on Alan, answer this question, tell me what I should do right and tell me what I should do next. So yeah, I use lots of words to try and coerce it in the right direction.
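The spoken "Claude is waiting for input" alert Alan describes could be sketched roughly like this in Python. Everything here is an assumption, not his actual setup: it presumes a `piper` TTS binary and ALSA's `aplay` on the PATH, a hypothetical voice model name, and leaves out the wiring that tells the agent to run the script when it is waiting.

```python
import shutil
import subprocess

def build_speak_pipeline(model: str = "en_GB-alan-medium") -> tuple[list[str], list[str]]:
    """Return the two halves of a `piper | aplay` pipeline (model name is illustrative)."""
    piper_cmd = ["piper", "--model", model, "--output-raw"]
    # Raw 16-bit mono PCM; the sample rate would need to match the chosen voice.
    play_cmd = ["aplay", "-r", "22050", "-f", "S16_LE", "-t", "raw", "-"]
    return piper_cmd, play_cmd

def announce(text: str = "Claude is waiting for input") -> None:
    # Stay silent if the tools aren't installed rather than crashing the hook.
    if not (shutil.which("piper") and shutil.which("aplay")):
        return
    piper_cmd, play_cmd = build_speak_pipeline()
    piper = subprocess.Popen(piper_cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    player = subprocess.Popen(play_cmd, stdin=piper.stdout)
    piper.communicate(text.encode())  # feed the text, close stdin
    player.wait()
```

A notification hook in the agent's settings could then simply invoke this script whenever the agent pauses for input.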
[00:26:14] Simon: I’m glad you mentioned headphones there 'cause I can just imagine the family going, oh dad, we’re trying to watch a film.
[00:26:20] Alan: Do you know what’s even worse? It’s my voice that it does it in. I use a tool called Piper on Linux, and many years ago I donated my voice to an open source project. And because my name is Alan, my voice model appears near the top in Hugging Face.
[00:26:39] Alan: And so people download and use my voice for stuff, and I use it as well. It’s quite funny having my own voice coming out of the computer saying, Claude is waiting for prompt.
[00:26:47] Simon: So for people who have used that in the past, this could be quite a strange episode for them who are listening to you directly as well, right?
[00:26:55] Simon: They’re thinking there, their kernels. Hello again? Yeah. Awesome. So yeah, great. And Claude is your go-to tool these days.
[00:27:06] Alan: So yeah, I think I’d say Claude is the primary one I jump to. And I tend to use Gemini if I’ve run out of tokens in Claude. If I don’t really want to keep throwing money at a project, if it’s not useful for me and you’ve got to be tight on the budget, sometimes I’ll just switch to Gemini, because I have a Gemini account, right? And it’s very similar, but Gemini seems to be a little bit more constrained in how you can prompt it. And so I don’t feel as free with Gemini, even though it has a much larger context window.
[00:27:46] Alan: I don’t feel as free because I feel like it sometimes says, no, I’m not going to do that. And certainly I see that in TypingMind. The prompts that you give to Claude will work, but the same prompt you give to Gemini and it won’t. The one other command line one that I’ve used recently is AMP, which has an interesting model where they have a free ad-supported tier.
[00:28:09] Alan: And I gave that a test this week and actually asked it to create a web-based RSS reader. And it did, and I was quite surprised how good it was. And I didn’t give it any tools. There were no MCPs connected to it. I just used pure Amp on its own and it was very much like Claude, but it, yeah, it...
[00:28:32] Simon: And AMP is the coding agent from Sourcegraph, I believe, right? I think I’m making a huge assumption that we should check afterwards. Yeah, I’m pretty sure AMP is from Sourcegraph, the creators of Cody as well, which is nice. So should we jump over to MCP servers?
[00:28:53] Simon: So MCP servers, there are registries after registries. In fact, there are registries of registries, I think, that allow you to track all the different MCP servers. Actually, we had a great session around MCP.run, which is a super interesting, nice SaaS version that can host MCP servers and things like that as well.
[00:29:16] Simon: But when I remember when I said to you kind of, you know, what are your favorite MCP servers, what are the MCP servers that people mostly use? Tell us how you discovered what MCP servers people were mostly using.
[00:29:30] Alan: Ah, that’s a great point actually. So I have my own, but it turns out other people have preferences and I could go and ask everyone on the internet or put a poll on social media and say, hey, what are your favorite MCPs?
[00:29:43] Alan: But no, that’s far too easy. And I wrote a script or I asked Claude to help me write a script which basically scraped GitHub looking for the configuration files that people list the MCPs in. So MCP.json is a common one that a lot of them use, and it’s a JSON file that lists in a structured way all the MCPs.
[00:30:08] Alan: And so you can see in that first slide we showed a list of some of the MCPs I’ve used. I mentioned a couple of them, but I wondered what other people use. And I got it to write a report, and this is that report. Now, bear in mind, this is scraping GitHub. Not everyone is going to add their MCP.json to their project.
[00:30:33] Alan: And some people have private projects and some people use things other than GitHub, like GitLab or other code hosting tools, and some people will have it totally in-house. So this isn’t an exhaustive list of every MCP that everyone’s using. I tried to get it to look at the different types of setting files and configuration files for different IDEs like Cursor and VS Code, and it found quite a lot and it took quite a while to scan through all of these.
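The extraction step in a scrape like Alan's can be sketched briefly. The layout varies between tools, but a common shape for these files is an `mcpServers` object keyed by server name; this sketch assumes that shape, and the GitHub-searching and authentication around it are omitted.

```python
import json

def extract_servers(raw: str) -> list[str]:
    """Pull configured MCP server names out of one mcp.json-style blob.

    Assumes the common {"mcpServers": {name: {...}}} layout; returns []
    for anything else, since scraped files are often truncated or invalid.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return []
    servers = data.get("mcpServers", {})
    if not isinstance(servers, dict):
        return []
    return sorted(servers)

sample = '{"mcpServers": {"filesystem": {"command": "npx"}, "fetch": {"command": "uvx"}}}'
# extract_servers(sample) → ["fetch", "filesystem"]
```

Being defensive about malformed input matters here: code search over public repos returns plenty of half-committed or template config files.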
[00:30:59] Alan: And some of the data that it pulled out were which models are people using. It’s not perfect, this is alpha-quality code, but it was interesting to see which models people were using. It also pulled out some sample repositories and some of them have rather enormous settings files.
[00:31:22] Alan: And I thought, why is that? Why would people have such enormous settings files? And you can look at them because these are public repos. And so I went and had a look at these settings, and some of them have just got no wildcards. It’s like, I can edit this file and this file and this file and this file.
[00:31:37] Alan: So some of them probably need a bit of optimizing. I don’t know, maybe there’s an opportunity there for a tool to optimize someone’s Claude settings, and I could file an issue on their GitHub. But yeah, there are all these different settings that people put in their tools.
[00:31:55] Alan: And somewhere down here I’ve got some MCPs, and somewhere down here, have I got a list of MCPs that people use, sample repos?
[00:33:12] Alan: So in these reports, we found a whole load of configuration files. It gave me some repositories that have configuration files in them, variables, and here we go. There’s a list of some of the tools that people had in their configuration files. Now, the thing that I found surprising about this was that the same few tools pop up at the top of the list quite a lot.
[00:33:38] Alan: And I think this is part of what I was talking about earlier. People learn from others, and they copy and paste, and it’s almost like a cargo cult of copying and pasting stuff from one place to another. The other thing is there are repositories that are collections of other people’s settings files so that people can learn.
[00:34:00] Alan: So it is a bit wild west and there’s a bit of duplication there. But you can see here there are some of the ones that I’ve already mentioned, like file system, being able to access the file system and sequential thinking and memory. And yeah, there’s a bunch of interesting ones like Brave Search.
[00:34:20] Alan: So being able to use Google to search the web is one option, but there are other MCPs to be able to search the web with Kagi or Brave or I think there’s probably a Bing one as well. I haven’t looked, but these are some of the ones that people are clearly using because they’re showing up in their settings file.
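Ranking the servers that "pop up at the top of the list" is, at heart, a frequency count. A sketch, assuming you already have a list of server names per scraped config file (function and variable names here are invented for illustration):

```python
from collections import Counter

def top_servers(per_file_names: list[list[str]], n: int = 5) -> list[tuple[str, int]]:
    """Count each server at most once per config file, so one
    repo listing a server twice can't skew the ranking."""
    counts: Counter[str] = Counter()
    for names in per_file_names:
        counts.update(set(names))  # dedupe within a single file
    return counts.most_common(n)

scraped = [
    ["filesystem", "sequential-thinking"],
    ["filesystem", "brave-search"],
    ["filesystem", "memory", "memory"],  # duplicate counts once
]
# top_servers(scraped, 1) → [("filesystem", 3)]
```

Deduplicating per file is the design choice worth noting: without it, copy-pasted or repeated entries inside one repo would inflate the popularity numbers Alan is trying to read off.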
[00:34:41] Alan: I’d be interested to dig deeper into which ones people are actually using and which ones are in repos that maybe we can’t see. And I think there is an opportunity there to surface this information about MCPs that people are using because it is a bit unknown and a bit daunting for someone new to know which ones to pick.
[00:35:04] Simon: I found that really interesting, not just to see which ones exist in configuration, but to see which ones are actively being used in sessions to see if there’s that data. That would be interesting actually. I wonder if MCP.run share any of that data about usage and things like that. That would be super beneficial.
[00:35:21] Simon: And I think that really resonates with me, the fact that people are trying to learn the best practices for themselves, probably way too much in isolation, and there’s a real lack of the ability to share that, I guess, in a way that makes it easily discoverable.
[00:35:41] Simon: I suspect a lot of people are probably trying to roll their own here, particularly within organizations. And it’s much harder obviously to add those policies outside of an organization to say to people, hey, here are some great Cursor rules or any kind of configuration for an agent or something like that.
[00:36:00] Alan: Yeah. And you also have to be really careful because sometimes people are putting secrets in their MCP configuration. And so they actively don’t want to share that, not because they don’t want people to know which MCP tools they’re using, but they don’t want to accidentally share their GitHub personal access token.
[00:36:19] Alan: Or some other, more important API token. And so there is this feeling that I shouldn't share this. But I feel like there is a space where we could have a database of MCPs that are rated both by popularity and by quality: whether they get software updates, and whether they're still maintained.
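For example, rather than pasting a GitHub token into the shared file, many MCP clients let the entry reference an environment variable instead; the server package shown and the `${...}` expansion syntax vary by client, so treat this as a sketch:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```

The secret then lives in the shell environment (or a local, git-ignored file), and the config itself is safe to share.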
[00:36:43] Alan: Because, you know, sometimes you configure an MCP in your agent and then it goes away, because they've moved the repository, or they're bought by somebody else, or they've just given up. And so it can be a bit dangerous having this tool hanging around in your MCP JSON file that you think, well, this is giving me value.
[00:37:06] Alan: Is it still giving you value? It could be that that's actually a liability and no longer a benefit to your project. And so there does need to be some auditing of these tools, and I know there are some organizations already doing this to make sure that people aren't getting themselves in trouble using tools that are no longer maintained, insecure and so on.
[00:37:28] Simon: Yeah. Should we jump into some demos? Should we see what these MCP servers are all about?
[00:37:34] Alan: Yeah, so one of the ones that I use, well, a fair amount, is Firecrawl, and the YouTube transcription one. So I think if I jump out of this, I think I might have it configured in another folder somewhere. We'll say goodbye to that RSS reader.
[00:37:52] Alan: I assume there is some code in there. Okay. And I think I have one somewhere in here. Let me just find it.
[00:38:15] Alan: Oh no, I don't wanna look in there.
[00:38:28] Alan: Slides. Okay. Let me just move that, sorry. I should have made this.
[00:42:15] Alan: Let's do a demo using Claude. So if I open Claude, it will see that there are some MCP servers configured in the JSON. And if I then do slash mcp, we should be able to see the status that all of them are connected and working successfully. We probably won't need to use all of them for this little quick demo.
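A minimal MCP JSON along those lines might look like the following; the filesystem server is one of the reference implementations, while the transcript server name and the folder path are placeholders for whatever you actually use:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/alan/videos"]
    },
    "youtube-transcript": {
      "command": "uvx",
      "args": ["youtube-transcript-mcp"]
    }
  }
}
```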
[00:42:40] Alan: All we're gonna do is say, we've got a YouTube video over here, one that you may have seen before. And I want to create a YouTube description, maybe some social posts and a blog post. Let's see if it'll do all of those. Grab the transcript of the attached YouTube video and create a description.
[00:43:08] Alan: Some social posts and an in depth book post about the topics covered.
[00:43:23] Alan: Let's see how this does.
[00:43:27] Simon: And we could do this for this episode as well and share that. That would be kind of fun. So one of the things that Claude loves doing is creating these plans, right? This set of to-dos plans out what it wants to do, and then it goes through them.
[00:43:44] Alan: Yeah, I find, ah, now you'll see there, it's immediately said it's gonna use a tool. So it's figured out that YTT is the one it wants to use and I'm happy for it to use YTT. So yeah, it creates the plan and then works through it. Incidentally, I mentioned Amp earlier as another agent. It does a very similar thing, but the UI is different and I really like it.
[00:44:03] Alan: It has a box on the right-hand side and it keeps the to-do list on screen, whereas I often find with Claude it all scrolls off and I'm not entirely sure what it's up to at any time. I'm sure I could bring it back or just scroll back. But yeah, I was quite pleased to see that Amp keeps it right there on screen, so I can see it at all times and see what progress it's making.
[00:44:29] Alan: Wow. It is actually doing it.
[00:44:33] Simon: I did Google Amp actually while we were waiting and yes, it is a Sourcegraph tool.
[00:44:38] Alan: Nice one.
[00:44:39] Simon: I was worried because once I said it, you can't take it back, right? It's on the internet. If not, they'd have to just go and acquire them.
[00:44:46] Alan: Well it's also good to be wrong sometimes on the internet.
[00:44:48] Simon: It is. People will correct you.
Alan: Yes, yes. There's no lack of that.
Simon: And it shows you're human and not a robot.
Alan: Yes. Yes. So yeah, this will probably take a little bit, but it shouldn't take too long. We could probably speed it up a little bit.
[00:45:08] Simon: Yeah. Let's fast forward this bit.
[00:45:23] Alan: But maybe it's going too in depth.
[00:45:32] Claude voice: Claude is waiting for human response.
[00:45:35] Alan: Did you hear that?
Simon: I heard that. That was incredible.
Alan: That was me.
Simon: I couldn't tell whether that was live you or recorded you. That was weird.
Alan: It clearly works then. So I dunno if it's actually saved it somewhere in the current directory.
[00:46:10] Simon: That's the interesting thing with something like this, it provides the output directly into the terminal, right. But it doesn't necessarily, unless you explicitly ask it to, save anything to disk or it knows it needs to perhaps for a coding file or if it wants to make a change to a coding file or something like that.
[00:46:32] Alan: Yeah, and that's one reason why I really quite like using TypingMind because what TypingMind will do is while you're in the webpage having a session, anything that it creates, there's a copy button just right there on the screen, just like with ChatGPT and all the other web-based ones.
[00:46:50] Alan: But one of the nice things is you can copy it with formatting and paste it straight into a Google Doc and it will come out formatted well for editing in a Google Doc, which is pretty awesome. Right? So it thinks it's created some files. So now we have “is waiting for human response.” I got interrupted by myself there, which is, yeah, not uncommon.
[00:47:13] Alan: So let's look at the video description. It's got the name of the person. Does this match the conversation that you had in that?
Simon: Yeah, I think so. Pretty good. Pretty well. This was actually Guy, this was Guypo, of course.
Alan: Oh, I didn't see who presented that one, sorry. So then the social posts are a bit tricky, because social posts from all of these AI tools often smack of being written by AI tools.
[00:47:45] Alan: And so they really need a bit of human interaction to do a bit of finessing of these. But it gives you some ideas. It's good for ideation and summarizing what's going on. And then the blog post, let's have a look at that.
[00:48:04] Alan: I mean, it's comprehensive. If nothing else, there's a lot there.
[00:48:10] Simon: It's a good starting point, right? If nothing else, it avoids that writer's block; you have a whole bunch of thoughtful areas to start from.
[00:48:19] Alan: And to be fair, I did yolo it and say, write the blog post. What I should have done is say, give me some topics, some things, and then we could have worked on it together, iterated. But for a quick demo, I think it illustrates that. Like, I didn't have to manually go and get the transcript for that video. And I realize that's a very simple example, but there's so many of these that when you hook them all together and you have a whole bunch of tools working together, orchestrated by your agent, it dramatically accelerates what you can do because you're not stuck having to go and manually do things along the way.
[00:48:52] Alan: And that's what I find accelerates everything for me. Yeah. Yeah. [00:49:00] Should we do another MCP demo? Yeah, this one will be a trickier one. I actually wrote an MCP server, which is the one at the bottom of this list: Grype. So, from when I worked at Anchore. Grype is a vulnerability scanning tool.
[00:49:19] Alan: And I wanted to have an MCP server that would scan the code in the repository that I'm sat in and make sure it didn't have any vulnerabilities. Now I haven't tested this recently, so I dunno if this is gonna work. Well, we'll give it a try and see. What might be good here is if I let me set this demo up.
[00:49:59] Alan: Right. So what I'll do, right, so for this demo, I'll create a little piece of software and then I'll see if I can get it to do a vulnerability scan. And again, I'll use Claude. And this is a fresh folder, so it's asking me again if I want all those CPs. Now what I'll do is I'll try and trigger it by making it build something that will be using out of date software and by forcing it to use out of date software, maybe we'll find some vulnerabilities, assuming my MCP works.
[00:50:29] Alan: So create a Hello World DJ app, crikey, app using what's the oldest version of Django that we think it will be able to do. Oh my gosh.
[00:50:45] Simon: Don't ask me for Python advice.
[00:50:47] Alan: Using Django 4.0. There we go. I think that's old. That sounds old. Using Django 4.0, and use uv for dependencies. [00:51:00]
[00:51:04] Alan: So all this is gonna do is create a little bit of code. It probably won't do an awful lot. It'll create a virtual environment using UV and then install whatever Python packages are required to get this project bootstrapped.
[00:51:24] Simon: And then, for those who don't know what uv is?
[00:51:26] Alan: So a lot of people use Python virtual environments so they can have isolated directories for each of their projects, and most of the time people create them with Python's venv tooling. There are other ways to do this. uv is a relatively new tool written in Rust, and it's a lot faster than the standard Python virtual environment tooling. So instead of doing pip install something, you do uv pip install something. And actually it's quite common to see MCP servers that are delivered using uvx.
[00:52:07] Alan: And so it's the equivalent of how Node packages use npx: Python ones use uvx. And in fact, that's how the Grype MCP server is delivered. It's written in Python, and all you put in your MCP JSON is uvx grype-mcp, and it will go and find it. And it's just a really nice way to contain all of the dependencies for your Python project in one directory.
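So the Grype entry in the MCP JSON can be as short as the following; the package name here follows Alan's description, so double-check the exact spelling against the registry you install from:

```json
{
  "mcpServers": {
    "grype": {
      "command": "uvx",
      "args": ["grype-mcp"]
    }
  }
}
```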
[00:52:38] Simon: Nice. So it's initialized it and now it's creating the project structure, the Django project structure.
[00:52:45] Alan: I envisaged Hello World being quite quick to create.
[00:52:56] Alan: I find it interesting that sometimes Claude has difficulty writing files, and then it just tries again and does it. I'm not entirely sure how it manages this.
[00:53:24] Simon: So yeah, we could fast forward through some of this.
Alan: Yeah, absolutely. We'll speed up, we'll 2x or 3x a few of the visuals.
[00:53:58] Claude voice: Claude is waiting [00:54:00] for human response.
[00:54:02] Alan: Yeah. I'm never gonna get tired of that.
Simon: You say that, it's only been three times.
Alan: Yeah. Alright, I'll mute that. So, it's created a little project. Let's see if we can use the Grype MCP to scan for vulnerabilities. So: scan the project for vulnerabilities now.
[00:54:27] Alan: What might be interesting here is I don't actually have Grype installed, I don't think. I might, but one of the things my MCP does is it goes to find the binary to see if you've got it. And if it can find Grype in the path, then it will use it. If it can't find it, it will go and get the binary.
[00:54:45] Alan: And if Grype is outdated, it will try and update it. Once it’s found the binary and it’s able to run it, it then has to download a vulnerability database. Every time you run Grype for the first time, it pulls down a vulnerability database, which is like a hundred meg or something like that. So I suspect it’s probably doing that right now.
[00:55:08] Alan: It will then scan the directory. So now it wants to scan this directory, and I'll say, yep, scan the directory. And what it should do is potentially find some vulnerabilities in whatever dependencies have been added to this Python project. And what I would envisage doing, rather than doing this manually, is putting it in my agents.md or CLAUDE.md to say: scan for vulnerabilities before committing any code, or scan for vulnerabilities before uploading any code to GitHub, because I don't really want to publish it and then scan; or just test that it works, and then scan.
[00:55:46] I want to make vulnerability detection something that's just part of the process. And by having a vulnerability scanner, whether it's Grype or any other, as an MCP server, it means it's right there as part of my developer workflow. It's not an afterthought. It's not something I do later.
[00:56:04] Alan: It's something that I can integrate right into the process.
[00:56:09] Simon: And of course, there's more you could put in there as well, not just the process, which is super important, but also the policies: to say, like, with the two high vulnerabilities that have come back here, we don't want any high or critical vulnerabilities.
[00:56:23] Simon: Medium and low is okay, maybe, for a specific policy, and you can almost ask it not to check in anything that has high security vulnerabilities unless the user absolutely says so, and so forth.
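Grype can emit machine-readable results (grype -o json), so the policy Simon describes can be sketched as a small gate over that output. The field names below follow the shape of Grype's JSON report (matches[].vulnerability.severity), but treat them as an assumption to verify against the version you run:

```python
# A sketch of a severity policy gate over Grype's JSON report
# (produced by `grype -o json`). Field names are assumptions to
# verify against your Grype version.
import json

BLOCKING = {"High", "Critical"}  # severities that should block a commit

def blocking_vulns(report: dict) -> list[str]:
    """Return the IDs of findings severe enough to block."""
    return [
        match["vulnerability"]["id"]
        for match in report.get("matches", [])
        if match["vulnerability"].get("severity") in BLOCKING
    ]

# Tiny hand-written sample in the same shape as a real report.
sample = json.loads("""
{"matches": [
  {"vulnerability": {"id": "GHSA-aaaa-bbbb", "severity": "High"}},
  {"vulnerability": {"id": "CVE-2024-0001", "severity": "Medium"}}
]}
""")
print(blocking_vulns(sample))  # ['GHSA-aaaa-bbbb']
```

The agent instruction can then be as simple as: refuse to commit while the blocking list is non-empty, unless the user explicitly overrides.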
[00:56:37] Alan: Yeah, and this hasn't given me a super detailed explanation of what the problem is. I can certainly go and Google that, but if I wanted more detail, it now has the names of the vulnerabilities, so it could describe them to me, give me the detail, and say, here's what you should change.
[00:56:53] Alan: And that can actually be fed back to Claude, or whichever agent I'm using, and the agent can then fix it. So it says, I recommend you do this thing here. And I say, okay, do it, and away it goes and fixes my code. So I've found the vulnerability, and potentially I can have Claude just fix it for me.
[00:57:14] Alan: Because I made a silly mistake and I said, use Django 4.0. And what I actually meant was Django 4.2.
[00:57:21] Simon: Awesome. So what's next then? I think lots of people are now using MCPs. What would you say is one of the next big things that we should be thinking about?
[00:57:38] Alan: So, I mean, it's predictable, but I'm gonna have to say it's spec-driven development. It's about the process of creating new software. For someone like me, who can articulate in English what I want but doesn't necessarily know the details of how every single one of the dependencies gets built, trying to articulate to the agent how to do those things is kind of wasting time.
[00:58:03] Alan: Someone else must have done this already. Someone else must have built this library or used this API. And so I feel like there should be a hub for this. I feel like there is a real opportunity for those little snippets of instructions, those specs, that we can scatter throughout our project and have that do the work.
[00:58:29] Alan: So I want to have something where there is this repository, and that's, you know, the Tessl MCP and the spec registry, that the agent can call upon to build software in a way that is secure, efficient, reliable, and reproducible. [00:59:00]
[00:59:15] Simon: Yeah, absolutely. And I think the sharing capability here within the spec registry is very significant, because it really starts providing that collaborative approach too.
[00:59:33] Alan: Yeah, and it also saves a lot of patience.
[00:59:36] Simon: Work, as well as the guidance. It's very hard to hand out guidance for those policies across a project. I don't have to tell it every time, for Python, say; sometimes it goes and gets specs for those. You might choose to override certain settings or frameworks. And if you work in teams, you don't want everyone to develop in different ways. You agree on a way, a stack, that everyone abides by, and this is where I think having a platform, having that...
[01:00:07] Simon: That space in which people can share. It comes back to the people who are creating with these agents themselves. And I feel like it's about steering agents: policies, workflow.
[01:00:18] Alan: Yeah. We don't want everyone to have to yolo their way through, and try and corral and course-correct constantly with their agent, and have to keep saying: well, I did ask for that earlier, and you didn't do it; I've already told you to do this, and you didn't do it.
[01:00:36] Alan: And so I feel like these specs can keep it on track, rather than it all going wrong. I prefer that consistent approach rather than just having an argument with an agent. It feels like a bad use of my day. [01:01:00]
[01:01:02] Alan: Oh my gosh.
[01:01:29] Alan: Thanks so much. I look forward to more of these MCPs appearing and people sharing what they're doing.
[01:01:54] Alan: I'll make sure it happens. Don't worry.[01:02:00]
[01:02:12] Simon: Yeah, Tessl. If folks wanted to take a look at the spec registry and how that can be achieved, please do check that out. Alan, we're at time already. I really appreciate all the wonderful demos. I think it would be great for people who are just getting started with MCP, just getting started with agents, to see what's available, learn what others are using, and how easy it actually is to get started and get enabled with MCPs.
[01:02:44] Simon: And how it truly extends the capabilities of an agent. So thank you very much for showing that.[01:03:00]
[01:03:00] Simon: Yeah. And I hope the biggest takeaway isn't that we should all have this nice little audio announcement saying that Claude is waiting for human interaction. That shouldn't be something everyone does in an office. I can only imagine in an open-plan office how frustrating that would get.
[01:03:21] Simon: Awesome. Alan, thanks again. I appreciate everyone for joining in. We'd love to hear what MCP servers you're using. And if there's anything you want to share so others can learn from your knowledge and experience, please ping us at podcast@tessl.io and let us know what you are using.
[01:03:42] Simon: Thanks again, Alan, and thanks all for listening. Tune in next time.
In this hands-on episode of AI Native Dev, host Simon Maple welcomes developer advocate and community leader Alan Pope to explore how modern agents and MCP servers turn ideas into shippable software. Alan traces his journey from “not a developer” to publishing working tools by moving from prompts to true specifications, then wiring agents to real-world capabilities via the Model Context Protocol. Along the way, he demonstrates TypingMind as an extensible hub, multi-model workflows with Claude, Gemini, and ChatGPT, and practical MCP servers like a YouTube transcript downloader built on yt-dlp.
Alan’s origin story is familiar to many builders: lots of ideas, difficulty getting started, and a backlog of half-finished projects. His inflection point came when LLMs made it easier to bridge the gap between “what I want” and “something that runs.” The key shift wasn’t just better prompting—it was writing technical specs that encode architecture choices, dependencies, and conventions. Instead of a loose design doc, he now writes actionable specs that state, for example, “use Python with uv for dependency management, store data in X database, expose a REST API with these endpoints,” and include the expected code structure.
This spec-first approach helps LLMs implement predictable scaffolding and best practices because the constraints are explicit. Alan emphasizes that specs don’t have to be exhaustive to be effective; they need to capture decisions that unblock implementation. For developers, that means encoding everything the agent should assume—tooling, frameworks, API shape, data models, tests, and quality gates—so the model can produce a project that compiles, runs, and is maintainable, instead of a one-off script.
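A spec of that shape might look like the following sketch; the project name and the specific choices are hypothetical, shown only to illustrate the level of detail that unblocks implementation:

```markdown
# Spec: link-shortener (hypothetical example)

- Language: Python 3.12; dependencies managed with uv
- Framework: Django 4.2 LTS
- Data: SQLite in development, PostgreSQL in production
- API: REST; POST /links creates a short link, GET /<slug> redirects
- Quality gates: test suite must pass; no high or critical findings from the
  vulnerability scan before commit
```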
Agent experiences have matured quickly. What once required copy/pasting code out of a chat and wrestling with context limits now often ships with an embedded editor, a test runner, and tool access inside the browser. If you’re new to coding agents, Alan says it’s fine to begin where you already are—ChatGPT or Claude in the browser—and then graduate to a more configurable hub as your needs grow.
A standout option is TypingMind, which preserves the simplicity of a chat UI while letting you choose the underlying model per-conversation (Claude, Gemini, ChatGPT) and even fork a conversation midstream to a different model. That flexibility matters when one model is better at reasoned planning (e.g., Claude Sonnet), another at retrieving facts or code synthesis, and a third at formatting or summarization. Developers can start a build with one model, branch to another for research or refactoring, and keep the overall context. This multi-model strategy reduces dead ends and lets you leverage each model’s strengths without context resets.
The magic happens when agents can act. MCP (Model Context Protocol) servers expose capabilities—file access, web crawling, APIs, CLIs—that the agent can invoke safely and predictably. Alan highlights a concrete example: a YouTube transcription MCP that wraps yt-dlp to fetch transcripts. Instead of context-switching to a shell script, he can stay in the chat, ask for the transcript, and chain downstream tasks like summarizing, extracting keywords, and drafting a blog post. The agent orchestrates tools; you stay in flow.
Other MCP servers he calls out include Firecrawl for structured web crawling and “sequential thinking” utilities that encourage the agent to plan work in discrete steps before execution. The takeaway is clear: make your agent a router for the best-in-class specialized tools. Practically, that means:
As the catalog of MCP servers grows, developers can assemble custom stacks that map to their workflows—research, data extraction, code generation, CI hooks—without hand-wiring brittle scripts.
TypingMind doubles as a project cockpit. Alan shows how he created a “Write like Pope” agent by uploading decades of his blog posts into a knowledge base. The result? When he prompts, the agent adopts his voice: TL;DR sections, subheadings, and the right cadence. Beyond writing style, knowledge bases can hold documentation, API contracts, or existing code so the agent has the right local context when generating or refactoring modules.
One standout integration is the File System tool. Even from a web UI, you can grant the agent access to specific folders so it writes code directly to your machine. That eliminates the “download a ZIP from the chat” loop and preserves your editor, version control, and terminal-centric workflow. Give the agent a spec, run it through a multi-step plan, and watch as it scaffolds folders, initializes uv for dependencies, writes tests, and fills in modules. Combined with model switching, you can start planning with Claude Sonnet, fork to Gemini for data gathering, and finish formatting or testing with ChatGPT—all under one conversation thread.
A concrete workflow Alan shares ties everything together. As a DevRel task, he frequently turns videos into blog posts and social content. Previously, he used a manual shell script around yt-dlp. With MCP and TypingMind, he now runs the entire pipeline in a single chat:
This pipeline illustrates a broader principle: keep humans on high-value edits and decisions while outsourcing I/O and transformation steps to tools. Developers can replicate the approach for docs-from-PRs, release notes from commit logs, or even data-to-report pipelines—anywhere transcripts, text, or structured outputs need to flow through research, summarization, and formatting stages.
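The shape of that pipeline can be sketched in a few lines. The function bodies below are stand-ins for the real steps (a transcript MCP server wrapping yt-dlp, then LLM drafting), not real APIs; only the flow is the point:

```python
# Hypothetical sketch of the video-to-content pipeline: fetch the
# transcript once, then fan out to each artifact, leaving a human to
# review the drafts at the end.

def fetch_transcript(video_url: str) -> str:
    # Stand-in for the YouTube-transcript MCP tool call.
    return f"transcript of {video_url}"

def draft(kind: str, transcript: str) -> str:
    # Stand-in for an LLM drafting step.
    return f"{kind} drafted from {transcript}"

def pipeline(video_url: str) -> dict[str, str]:
    transcript = fetch_transcript(video_url)
    return {
        kind: draft(kind, transcript)
        for kind in ("description", "social posts", "blog post")
    }

drafts = pipeline("https://example.com/some-video")
print(sorted(drafts))  # ['blog post', 'description', 'social posts']
```

Swapping any stage (say, release notes from commit logs instead of transcripts) keeps the same structure, which is why the approach generalizes.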
Alan’s message is optimistic and practical: LLMs and MCP servers don’t replace engineering judgment; they compress the effort between a well-formed spec and a working system. Start small, codify your preferences, and let agents handle the glue so you can ship faster.
