
"Agent Therapy"
for Codebases
[00:00:00] Simon: We have a second 30 minute keynote before we jump into main tracks, and that next keynote is gonna be from Sean Roberts from Netlify, who’s the VP of Applied AI. Now this is a really interesting session, because when we think about how we have built software over time, and we think about our core ICP when we build that software, we think about the user, and who that user is.
[00:00:35] Simon: Very often it's a human, a developer or just a consumer of our services. But times have changed with the adoption of agents being, you know, so in our face with everything AI. It's important that when we think about providing these services, providing these tools, we think about agents as users of them, as well as developers.
[00:00:57] Simon: And so this session is really gonna kinda like compare the two [00:01:00] and talk about how we can plan with agents in mind as our users. So please welcome on stage Sean Roberts.
[00:01:14] Sean: Alright. Hello everyone. As Simon said, I'm Sean Roberts. I lead the AI programs at Netlify, and I've been working, trying to make the web better, more secure, faster for a very long time. Especially at Netlify, I've been really trying to figure out how we can provide the best agent experience that we can for our platform, but also trying to figure out how the web and agents can work really well together.
[00:01:41] Sean: And so with that, you know, let's dive into the talk today around agent experience. I don't know the makeup of the crowd, but if you don't know what this is referencing, that's not a me problem. You know, take a picture, share with your friendly neighborhood AI agent, and they will tell you it's definitely [00:02:00] in the training data.
[00:02:01] Sean: But really what we're talking about today is how agent experience is an extension of developer experience. A quick raise of hands or horns here. Who is a senior plus dev or in charge of a code base today? A bunch of you. Yeah. Horns? Yes. Alright, raise your hands again if you feel like you are a steward of the developer experience of your code base today.
[00:02:27] Sean: Yes, yes. All right. Well, thank you for your service. This is often a very thankless task, but it's incredibly important because, as we know, developer experience really is greater than the sum of all of its parts, right? To have a great developer experience, you really need to have good documentation and onboarding.
[00:02:49] Sean: here's an intuitive design that goes into it. It's an intentional act that you put into it, but the reality is, if you're allowing people to develop on your platform [00:03:00] or on your code base, you have a developer experience. That does not mean it's good. It just means you have one. Just like every software has a user experience if they have users, and with agent experience, it's very similar, right.
[00:03:14] Sean: We have hit this critical mass where it's not a question of whether or not agents are working on your code base or on your platform. They are. The question is whether or not you are supporting them. And when we talk about agent experience, we do mean that kind of holistic experience that an agent has interacting with your system or your code base. And to illustrate this, right?
[00:03:38] Sean: Again, you know, when we think about developer experience, we think about these developers over here who are handcrafting beautiful, error free artisanal code. It's great. And the quality of them going from here to here is really determined by the quality of the developer experience. [00:04:00] And now we have developers who are delegating work to agents to do parts of this on their behalf.
[00:04:07] Sean: And when we talk about agent experience, we're talking about this connection here: how well are they able to achieve this same amount of work? And this connection is all predicated on how good the agent experience is. And I think it's very important to emphasise we're talking a lot about the agent's experience here.
[00:04:28] Sean: But this is all to benefit the human developer on the other side of it. This has nothing to do with replacing developers. These are developers delegating work to an agent, and the more successful we can make the agents, the more successful we can make the developers. Right? That's the whole point of this. It's also worth noting that when we talk about agent experience, we're talking about it today in regards to a code base.
[00:04:53] Sean: But agent experience is a discipline that spans much broader, right? Going forward, [00:05:00] e-commerce is gonna have to figure out how to serve customers who are using agents to buy on their behalf. Airlines, how they book flights and tickets when an agent has been delegated to do it.
[00:05:14] Sean: So it's not specific to us, but it's very important because right now agents are best at working with our code bases. But as Guy described, they have a lot of room for improvement. So one more time, who’s in charge of the developer experience of their code base here? Yeah. Oh, a lot of you left your hands down. A lot of people decided not to, but every one of you holding your hand up, and probably everyone in this room really, you are also in charge of the agent experience.
[00:05:47] Sean: Maybe that used to be a question mark before. It's not anymore. Part of serving developers is acknowledging the fact that developers are using agents to also work [00:06:00] with your code base. So you have this charge now. It's very straightforward, but having agent experience and using agents isn't a new thing.
[00:06:17] Sean: But where we are today is this thing I call the mini one-person-band problem. We've all gone off, our teams individually have gone off, figured out what works for them, and they've kind of accumulated their view of the world, and they have their whole setup, right? And then two days ago, Gemini 3 came out, and someone with 10 years of experience using the Antigravity IDE said, you should stop what you're doing here and switch to this.
[00:06:36] Sean: And so someone will, and now your team kind of expands its surface area, and so forth. And then, you know, there's probably gonna be another tool next week, and the week after, and we get this kind of sprawl.
[00:06:52] Sean: We have individually gone out, figured things out, and we've built this. This is the kind of mini one-person-band [00:07:00] problem that we see with AI native developer tooling today. How many of you have gone to a concert where there's like four or five of these one-person bands playing at the same time?
[00:07:12] Sean: Yeah. Zero of you, 'cause they can't. Because this is not a band. These are three individuals maybe trying to do their own thing, maybe toward the same end, but this is not a band. Okay. This is a band. So awesome. Again, ask your AI. This is a band. They have purpose built tools.
[00:07:37] Sean: They are all trying to come together for the same goal: to make great experiences for their customers, their audience, right? And that's really important, because together they're able to do far more than any one-person band, right? And you know, for those who like the [00:08:00] orchestra metaphor better.
[00:08:01] Sean: There you go. Alright, so now what is it that we actually need to do in order to have a better agent experience, to improve the developer experience of our code bases? And thankfully Guy did a lot of the examples and homework for me earlier, so that's awesome. But I'll dive into the other parts of it.
[00:08:26] Sean: I think what's really important to be clear on is that there's no specific solution. I wish I could say, if you do this, you're done, pack it up, let's go home. But there's not. Agent experience is a discipline, not a specific tool. Okay. So if you're thinking, okay, I have to support agents, so I just need an MCP server.
[00:08:51] Sean: That's all I need. Check the box, done. That's thinking far too narrowly about this problem. And whether it's MCPA to MCP, these are all solutions that are kind of under the umbrella of an agent experience. But the other reality is we're all still figuring it out. We have a lot of ideas about what works today, and then we're gonna prove those wrong tomorrow, and then we're gonna keep iterating on it.
[00:09:17] Sean: But we're right in the middle of it. And so where should you actually start? This is gonna be my recommendation around the kind of crawl, walk, run stages that you can take on to either start your journey or expand it going forward. So the first thing: if your team is just a bunch of one-person bands, or not doing any of this at all, start with figuring that out, right?
[00:09:44] Sean: I think there's often a hunch: yeah, my team's using AI, whatever, we’re AI native. It's like, no, figure out what they're using. Ask them. Figure out what's working, what's not. And this doesn't have to be a big effort. Get a Google form, whatever, [00:10:00] vibe code it, and just ask them: what agents are they using?
[00:10:04] Sean: What's working well? And to be clear, don't ask them the question that we asked a year ago, two years ago: have you tried any agents? Everyone's probably tried an AI tool by now. What we want to know is what people are using every day or every other day, what's working, what's not.
[00:10:21] Sean: And it's important to understand that for many reasons, not least of which is the fact that there are lots of AI agents and tools and combinations of them. For all the work that you're gonna be doing to improve the agent experience, this helps scope it much better. Just understand what people are doing, what they're using.
[00:10:40] Sean: Spec-driven development might make for a little different-looking agent experience than if they're using some other similar tool, or they're really into stuff like Lovable or Bolt.new, things like that. So audit, audit what people are doing, figure out what's going on, and then start to build your internal community, right?
[00:10:58] Sean: All that's really [00:11:00] involved in doing that is getting them together, you know, even infrequently, but get them together to talk about the issues that they're running into. Talk about the tools that they're using. If you really wanna learn something cool, come up with a problem, or take some ticket off the board, and say, hey, we're gonna solve this together: I'm gonna watch you solve it your way.
[00:11:21] Sean: I'm gonna solve it my way, and we're gonna learn together. That's some version of pair programming where there's also an AI agent involved. And that is so illuminating in terms of what people understand differently about the code base and how they approach the problem, and it will also start to build up your internal community to build towards an agent experience.
[00:11:48] Sean: So taking the next step forward, right? You know, the agents that you need to support, now you need to help them be better. Guy did [00:12:00] a fantastic job of explaining this for me earlier, with some data backing up that the LLMs themselves are not sufficient in terms of understanding your code base.
[00:12:12] Sean: And the reality is, if there is a gap in knowledge, it will confidently make things up or decide for you. It's not always made up or false; it might just be totally different, like his example of picking a theme that was not the right theme. To combat this, context files are where you need to spend your initial time, right?
[00:12:33] Sean: Depending on the tools, right: AGENTS.md files, steering files, Claude Code's CLAUDE.md files. These are going to be very important nuggets of information. And, you know, one rule of thumb that I think about is: if you can't simplify something, make it obvious, right?
[00:12:55] Sean: In many code bases, you see, like, okay, to run a test suite, [00:13:00] it's this magical command that you gotta run. That's not obvious. If you have those, make sure that they're in your context. Other areas are like if you have a decoupled architecture, so you have a front end that's decoupled from your backend, which might be decoupled from your data layer, et cetera.
[00:13:17] Sean: It doesn't necessarily know those connections, and you need to make those clear. Or it will make those connections up, every time. And that's gonna be incredibly important, especially whenever you get into this world where, as people are iterating on this, they start finding that, hey, something that we thought was obvious wasn't. Adding those things into your agents files and steering files is gonna be incredibly important.
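To make this concrete, here's a minimal sketch of what such a context file might look like. Everything in it, the commands, paths, and service layout, is a hypothetical example rather than a prescribed format:

```markdown
# AGENTS.md (sketch: all commands and paths below are hypothetical examples)

## Non-obvious commands (make the magic explicit)
- Run the full test suite: `make ci-local` (NOT `npm test`, which only runs unit tests)
- Regenerate API types after a schema change: `npm run codegen`

## Architecture (decoupled; do not assume a monolith)
- `apps/web` is the frontend. It never talks to the database directly.
- `services/api` is the backend; the frontend calls it only through `packages/api-client`.
- `services/api` owns all access to the data layer in `infra/db`.

## Known pitfalls (things we thought were obvious, but weren't)
- Feature flags live in `config/flags.ts`; never hardcode behavior a flag controls.
```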
[00:13:53] Sean: The next thing is the core dependency documentation I mentioned: the core dependency [00:14:00] of the front end is this back end, it's over here, here's how you use it, here's how you find information about it. And that's incredibly important. But there's also this problem of dependency documentation for things like third party modules.
[00:14:15] Sean: Even first party separated modules. And Guy went into a lot of detail about that. This has been bothering me for so long. Because, you know, the reality is, especially with brownfield type projects, the model is trained on version one of my API, the latest version is version three, but all my developers are using version two.
[00:14:40] Sean: And so it messes it up nine times outta 10. And the tooling for getting this right isn't great today. As I was actually writing this, I was very stoked to see Tessl's registry and the things that they're doing there. And so I'm gonna explore that more. I think that's very [00:15:00] exciting, and I definitely agree that there's actually more onus that needs to be put on the open source community to deliver these things.
[00:15:08] Sean: The way I kind of see the Tessl registry is: what if every dependency had to have its types, its documentation, hosted somewhere? That really needs to be there. But we also have a second problem: especially with open source repos, there's people who build on them, and then there's people who use them.
[00:15:29] Sean: And those are different sets of documentation. And so it still gets a little bit harder and harder the more you're trying to optimise. But again, we're all figuring this out together, so that's fine. Context7 is also something that people use; I've run into issues with it not doing very well on the versioning side of things.
[00:15:49] Sean: But regardless, if you have a key dependency that's a third party module, make sure it's in your context, make sure you're using it right, especially if it's things like databases, [00:16:00] because that's just such a dense topic and tool set that agents end up getting pretty confused with those types of tools.
[00:16:11] Sean: So make it clear. And especially for very large code bases, there's the concept of knowledge graphs. There are many different implementations of them; Cognition released DeepWiki and Codemaps recently. But the reality is, if you have a really large code base, every time that you ask it to do something, it's gotta kind of figure the code base out on its own.
[00:16:37] Sean: And the larger the code base, the longer that takes, the more expensive it is, and the more likely it is it's gonna miss parts or get confused. And this is where knowledge graphs really come in: you can kind of pre-produce this understanding of the overall architecture, how things relate to one another, and summarise those.
[00:16:55] Sean: And so before it tries to do things, it consults there first, and those are very helpful [00:17:00] for larger code bases. Feedback loops are gonna be incredibly important. So if you're already getting your internal community together, work with them to say: look, whenever we're having an issue, if a human had to manually intervene in an agent's workflow, get that documented, get that into context. If you're talking to that senior engineer who's been there for 15 years and has all the tribal knowledge in their head, and every time you have to consult them, that knowledge doesn't go back into documentation or context...
[00:17:33] Sean: I don't know what you're doing. That's this internal feedback loop. You have to have some forcing functions to make sure that your agents understand, because unlike you, they can't necessarily go ask that tenured developer who has all the information in their head. So establish feedback loops with your internal team.
[00:17:53] Sean: One practice that I really enjoy is I have this MCP where it's like, okay, if the agent's really [00:18:00] going off the rails here, I look back and say, what could we have done better here? It's like an agent therapy session. Like, what should we do better to make sure that you're gonna get this right next time?
[00:18:11] Sean: And specifically update the context files to make sure that happens. And then especially once we do that, assuming it makes sense at the time, having it try again and going from there. I have a lot of good results with that as well. I'm sure most of you, if not all of you, are using AI review tools in your code bases and your CI pipelines and such.
[00:18:36] Sean: So I won't go too deep into that, but the one addition on top of that is having your AI review tool not only figure out what's wrong with a thing, but also suggest patterns that were missing from your context files that would've prevented this from happening, so it doesn't happen again in the future.
[00:18:53] Sean: This is where you're getting towards the step of trying to automate some of this feedback loop.
[00:19:00] Sean: Ephemeral environments are something I'm a huge fan of. You know, at Netlify we do pre-prod deployments, or preview deployments, and we can do as many of 'em as the agent needs or as the developer needs. And they're amazing feedback tools for everything that's not specifically code related, right?
[00:19:21] Sean: So if it's like the style of a thing, or the layout of a thing, or if it's just making stuff up, you can have the human review these things, or, if you want, give the agent access to review them on your behalf. And then you start combining these things: an agent figures out, okay, I messed up the layout here.
[00:19:40] Sean: I'm gonna go fix it. I'm also going to make sure I document this in the context. Now you're really starting to see a full end to end picture, and this is ultimately what you end up with with a good agent experience and feedback lifecycle: developers can delegate work to the agents, and they generate the code.
[00:19:58] Sean: And this, again, [00:20:00] being the agent experience that they have access to here. They are better at doing this because of the feedback loop of making sure the context is good and appropriate for the jobs that they're doing, and so on. You have your reviewer agents figuring out, hey, this should have been done differently.
[00:20:20] Sean: I'm gonna make sure that they know that going forward. I can also make sure that they review the actual changes, not just the code, but also the output of that code, and then make suggestions for doing better going forward. Okay. And as you get into that, you're really advancing the agent experience of your code base.
[00:20:44] Sean: Shared subagents are a really cool unlock. Like Guy mentioned earlier, these are as simple as [00:21:00] markdown files defining: hey, this is a specialised agent. Because by default, all these agents are generalists, right? And as we know with full stack developers, they can get the job done, but a specialist is probably gonna get it done at least a little bit better, if not completely better.
[00:21:12] Sean: These are your specialised subagents. You define them in a markdown file. And then you can say, okay, this is our methodology for design. This is our methodology for copywriting. It has our voice and tone in it. This is our data migrations subagent. They're specialised at doing this. And what's gonna be really the big difference that you're gonna see here is that when you ask it to do something and you have subagents,
[00:21:40] Sean: it's gonna say, oh, actually it's better for this one to go do it. And it will kick that one off, and it has specialised context. Now, imagine you had five specializations that you wanted to really hone in on and tune, and you had to throw all that context into a single Claude MD file or agent's MD file. [00:22:00] That does not work well.
[00:22:02] Sean: And that's kind of why we have these: we've figured out that that doesn't work well. You overload the context, it gets confused, it starts to steal attention, things like that. So shared subagents are very important and really allow you to have a shared way of doing things across your code bases and across your different developers.
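As a concrete illustration, a subagent definition really can be this small. The sketch below follows Claude Code's subagent convention (a markdown file with YAML frontmatter); the name, rules, and tool list are invented for the example:

```markdown
---
name: data-migrations
description: Specialist for writing and reviewing database migration scripts.
  Use proactively whenever a task touches the schema.
tools: Read, Grep, Bash
---

You are our data-migrations specialist. Follow these rules:
- Every migration must be reversible; always write the down migration.
- Never rename a column in place: add the new column, backfill, then drop the old one.
- Check `db/migrations/` for the latest sequence number before creating a new file.
```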
[00:22:24] Sean: You have your context files, you have your steering files, and now you have subagent files. You're immediately gonna run into how do I distribute these things? Because as soon as you have more than one repo, as Guy was mentioning, there are things that are repo specific that always should be in the repo, and then there's gonna be things that are global.
[00:22:47] Sean: Your voice and tone for your brand, that's a global thing. Your brand logos, all that stuff is global. How you might do reviews, all those things are global. But how do you share that across [00:23:00] your developer base, which might be dozens or potentially thousands of people? And my preferred methodology personally is to centralise the globals into a shared repo and then have a CLI that people can run to update them and do those things.
[00:23:13] Sean: And I also learned through Guy's talk that they have this capability on the Tessl platform as well. But this is a good problem to have: okay, we've defined rules, they're working really well in our code bases, and now we actually need to propagate them so that we're working better across multiple code bases.
[00:23:31] Sean: And centralising these outside of a single agent, which Guy also mentioned, is incredibly important, right? Because, unless in that audit from step one they all said, I only use Claude Code, or they all only use Codex, picking one agent is a recipe for disaster.[00:24:00]
[00:24:00] Sean: Also, if they're all only using one agent, you're probably already setting yourself up for a future problem. But yeah, so centralise these. If you don't use a platform like Tessl, you can use a centralised GitHub repo that also has a CLI you run to do the sync.
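A sketch of what that sync CLI could look like, assuming the globals live in one shared Git repo; the repo URL and file list are placeholders, not a real setup:

```typescript
// sync-context.ts: minimal sketch of a "pull the global context files" CLI.
import { execSync } from "node:child_process";
import { cpSync, mkdtempSync, rmSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const GLOBALS_REPO = "git@github.com:example-org/agent-context.git"; // placeholder
const FILES = ["AGENTS.md", "steering/", ".claude/agents/"]; // shared globals

const tmp = mkdtempSync(join(tmpdir(), "agent-context-"));
try {
  // Shallow-clone the shared context repo...
  execSync(`git clone --depth 1 ${GLOBALS_REPO} ${tmp}`, { stdio: "inherit" });
  // ...then copy the global files into the current repo, overwriting local copies.
  for (const file of FILES) {
    cpSync(join(tmp, file), join(process.cwd(), file), { recursive: true, force: true });
  }
  console.log("Global agent context synced. Review the diff before committing.");
} finally {
  rmSync(tmp, { recursive: true, force: true });
}
```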
[00:24:21] Sean: And I think really on the bleeding edge of optimising your agent experience, we have these opportunities to run evals on the AX of our code bases. If you haven't done any agent evals, all they really are is putting an agent under certain conditions, asking it to do certain tasks, and evaluating its performance doing those things.
[00:24:47] Sean: But have you ever considered taking the Claude SDK or Codex SDK, throwing that in an eval engine, and saying: hey, try to do some stuff on my repo, and then help me figure out what I should [00:25:00] tell you so you do it better, where you're doing these things wrong? You know, Guy and the Tessl team have done a lot of this on the open source side of things, but doing this for your own repo is incredibly important, incredibly validating.
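Here's a minimal sketch of what such an AX eval could look like, driving a coding agent headlessly and scoring the result with deterministic checks. It assumes an agent CLI with a non-interactive mode (Claude Code's `claude -p "<prompt>"` is one example); the tasks, commands, and pass criteria are illustrative, not a benchmark:

```typescript
// ax-eval.ts: sketch of a tiny AX eval harness (tasks and checks are invented).
import { execSync } from "node:child_process";

interface AxTask {
  name: string;
  prompt: string;       // what we ask the agent to do in this repo
  check: () => boolean; // deterministic pass/fail after the agent runs
}

// Returns true if the command exits 0, false otherwise.
function run(cmd: string): boolean {
  try { execSync(cmd, { stdio: "pipe" }); return true; } catch { return false; }
}

const tasks: AxTask[] = [
  {
    name: "finds the real test command",
    prompt: "Run this project's full test suite and fix any failures you caused.",
    check: () => run("make ci-local"), // hypothetical "magical" test command
  },
  {
    name: "respects the decoupled architecture",
    prompt: "Add a 'nickname' field to the user profile page.",
    // Fail if the frontend now reaches into the data layer directly.
    check: () => !run(`grep -r "infra/db" apps/web/src`),
  },
];

for (const task of tasks) {
  run("git checkout -- . && git clean -fd");       // reset working tree per task
  run(`claude -p ${JSON.stringify(task.prompt)}`); // let the agent attempt it
  console.log(`${task.check() ? "PASS" : "FAIL"}  ${task.name}`);
}
```

Each failure then points at guidance that's missing from your context files.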
[00:25:13] Sean: It will really help you close the gap between what's clear in your documentation today and what's still missing. In addition to helping you fill gaps, it also helps you solve the one-way ratchet problem, or Asimov’s law of bureaucracy, which says that laws and regulations, once they're enacted, monotonically just grow.
[00:25:44] Sean: They never contract, and they never substantially get repealed or reduced. And we are seeing that happening with config or with context rules and context as well. They're just growing. We're like, oh, I missed this thing, I'm gonna keep adding to it. So how do you safely feel confident?
[00:26:02] Sean: And peeling back some of those requirements. And how do you do, how do you, you know, you fix this issue, but you cause other issues. How do you feel confident about not doing that? And I think evals for, for the aging experience is gonna be incredibly important. Cool. So we covered a lot of ground in terms of, you know, the developer experience.
[00:26:24] Sean: We, we all understand. That's our charge. Now, we're, we're, we've been in charge of the developer experience for our code bases, but agent experience is no longer optional. That's a part of what we have to deal with now. Not because we want to support agents, but because we want to support our developers who use agents.
[00:26:41] Sean: Okay. And I was really stoked on using the whole band metaphor, because I think what's really important about bands is that their size doesn't determine their success, right? It doesn't matter how big your band is, but what does matter is how well they can perform together and produce great outcomes.
[00:27:05] Sean: And so when we think about all the things that we've heard, like, oh, if you use these tools, you'll be a 10x developer. So now we get one one-person band, and this other 10x developer wants to be their own one-person band. But what's really magical is having a 10x team, right? And that really comes when we're all sharing knowledge and we're all working together.
[00:27:26] Sean: As, as the, the, as Bon Scott said from AC/DC, it's a long way to the top, if you wanna rock and roll, but I would amend that to say. It's a much more enjoyable ride if you're with your friends and with your band and you're going together. So with that, thank you all. Thank you for coming here. Drink some water.
In this episode of AI Native Dev, host Simon Maple speaks with Sean Roberts, VP of Applied AI at Netlify, about the emerging field of Agent Experience (AX) and its significance as the next evolution of Developer Experience (DX). They explore how developers can enhance their workflows by designing codebases to support AI agents as first-class users, emphasizing the importance of standardizing toolchains, making context explicit, and maintaining a continuous feedback loop for improvement. Sean highlights that AX is not about replacing developers but empowering them through strategic tool and process evolution.
Agents are now first-class users of your codebase. In this AI Native Dev episode, host Simon Maple talks with Sean Roberts, VP of Applied AI at Netlify, about why Agent Experience (AX) is the next evolution of Developer Experience (DX). Sean argues that developers increasingly delegate work to AI agents, so teams must intentionally design for an “agent user” the same way they design for humans—without losing sight that the end beneficiary is still the human developer.
Sean frames Agent Experience as an extension of Developer Experience: if developers are building on your platform or codebase, you already have a DX—good or bad. Now that agents are actively participating in development workflows, you also have an AX—good or bad. The strategic question is no longer “are agents touching our code?” but “are we supporting the agents our developers rely on?”
Crucially, AX is about empowering developers, not replacing them. Think of the developer as delegating a chunk of the build to an agent. The quality of the result depends on the connective tissue between the human’s intent and the agent’s ability to operate on your system. That “connection” is AX: everything the agent needs to perform well, from documentation to clear APIs and explicit architectural maps.
Sean emphasises AX is a discipline, not a single product switch. It’s tempting to “check the box” by adding an MCP server or wiring up a protocol, but protocols like MCP or A2A are just plumbing. What matters is the holistic experience—how easily an agent can understand your codebase, discover dependencies, pick the right patterns, and execute tasks without guessing.
Most teams have drifted into a “one-person band” pattern: every dev picks their own assistant, plugins, or IDE, and the stack sprawls with each model release or tooling trend. One teammate just adopted the latest model; another is deep into a specific spec-driven agent; a third prefers a new AI-native IDE. The result is inconsistent workflows, overlapping costs, and brittle support expectations for your platform.
Sean suggests moving your org from a loose jam session to an orchestra: purpose-built tools intentionally composing toward a shared outcome. That means actively curating which agents you support, defining the workflows they target, and documenting how they’re supposed to operate on your codebase. A cohesive band can do far more together than any number of solo acts.
This standardization doesn’t mean picking one agent forever; it means having a deliberate, evolving toolchain. Expect a fast cadence (e.g., new releases like Gemini 3), but treat changes like product decisions, not experiments. Pick a small set of supported tools (e.g., a spec-driven agent versus a new app builder like Lovable or Bolt) and define when and how to use them. Make sure your platform is ready to serve those agents with the context, constraints, and structure they need.
Crawl: Audit actual usage. Don’t ask if people have “tried” AI—assume they have. Ask which tools they use every day or every other day, what they accomplish with them, and where they get stuck. A simple survey (Google Form, internal form) is fine. The goal is to discover the agent patterns in your org: spec-driven development, app generators, IDE copilots, or autonomous task runners.
Walk: Build an internal community. Convene users regularly to share tips, failures, and successful patterns. Run a “pairing with an agent” working session: pick a real ticket, let two devs solve it with their preferred agent approaches, and compare. This spotlights gaps in your AX (missing docs, unclear scripts, version mismatch) and helps standardise on the patterns that win in practice.
Run: Scope support around reality. If your audit shows heavy usage of spec-driven agents, invest first in machine-readable specs, tests, and integration contracts. If users love app-builder agents like Lovable or Bolt, prioritise scaffolding, project templates, and CI hooks that agents can discover and invoke. Treat this as product management for AX: choose where to be great first, then expand.
Agents fill knowledge gaps with confident guesses. They’ll choose a theme you didn’t intend, wire to the wrong API version, or assume a monolith when you run a decoupled frontend-backend-data stack. The fix is to make context a first-class artifact that agents can reliably consume.
Start by codifying “agent-facing” docs: AGENTS.md, steering files, Claude Code's CLAUDE.md—whatever your toolchain reads. Include non-obvious commands (e.g., how to run tests), expected workflows (e.g., “always run unit tests before opening a PR”), and explicit architectural maps. If your frontend depends on a backend service and that in turn depends on a data layer, be explicit about services, endpoints, auth, and how to navigate repos. Don’t let the agent infer; define.
Maintain context like code. As your team discovers pitfalls—ambiguous naming, flaky scripts, confusing integration paths—patch your agent context files. Treat them as part of your developer platform, with code review and versioning. Combine them with “guardrails in code” where possible: script commands, health checks, and scaffolded templates that anchor agent behavior in executable truth rather than prose alone.
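One small example of that “executable truth” idea: a CI check that fails when the context file references an npm script that no longer exists, so the agent-facing docs can't silently rot. The file names and the convention being checked are assumptions for illustration:

```typescript
// check-context.ts: sketch of a CI guardrail for agent context files.
// Fails the build if AGENTS.md mentions an `npm run <script>` that is missing
// from package.json. File names and conventions are illustrative assumptions.
import { readFileSync } from "node:fs";

const context = readFileSync("AGENTS.md", "utf8");
const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const scripts: Record<string, string> = pkg.scripts ?? {};

// Collect every `npm run <name>` mentioned in the context file.
const mentioned = [...context.matchAll(/npm run ([\w:.-]+)/g)].map((m) => m[1]);
const stale = mentioned.filter((name) => !(name in scripts));

if (stale.length > 0) {
  console.error(`AGENTS.md references missing npm scripts: ${stale.join(", ")}`);
  process.exit(1);
}
console.log("All npm scripts referenced in AGENTS.md exist.");
```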
Brownfield reality is messy: your model may “know” v1 of a library, your codebase runs v3, and your developers use v2 in several services. Agents stumble when dependency docs, types, and examples are out of sync. Sean calls for better machine-readable dependency metadata and applauds emerging approaches like Tessl’s registry, which aims to host authoritative, agent-consumable docs and types for packages.
Treat dependency documentation as a two-audience problem: builders vs. consumers. Open source repos often conflate maintainer docs (how to build/extend) with integrator docs (how to use/version/upgrade). Agents need the consumer side: precise API signatures, supported versions, migration notes, and usage patterns. If the OS community doesn’t provide it yet, curate an internal registry-of-truth that maps package names to the version your org uses, with canonical import paths, examples, and upgrade guidance.
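A sketch of what one entry in such an internal registry-of-truth might look like; every field here is invented for illustration, not a standard schema:

```json
{
  "package": "@example/payments-sdk",
  "approvedVersion": "2.4.1",
  "modelLikelyKnows": "1.x; treat memorized examples as suspect",
  "docs": "https://internal.example.com/docs/payments-sdk/2.x",
  "canonicalImport": "import { createCharge } from '@example/payments-sdk'",
  "migrationNotes": "v2 renamed charge() to createCharge(); amounts are integer cents.",
  "usagePattern": "Always pass an idempotencyKey; see examples/charge.ts."
}
```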
Finally, remember that MCP/A2A, IDE extensions, Context7, and similar tools are implementation details. They can help transport context, but they don’t solve AX by themselves. The durable solution is a discipline: curated toolchains, agent-facing context, dependency registries, and a feedback loop that continually turns observed agent failures into improved guidance and scaffolding.
