
[00:00:00] Simon Maple: Hello, and welcome to another episode of the AI Native Dev. My name is Simon Maple, and I'm your host for this episode. Today, we're going to have a really fun chat about something called BMAD and the BMAD Method.
[00:00:12] Simon Maple: So this is, I believe, called Build More Architect Dream, which is a fun way of really talking about a spec driven approach to building software, where we really think about specifications first and then drive the code from there. It allows us to really think about how we create our intents versus thinking about the how too early.
[00:00:35] Simon Maple: And we capture those behaviors and those constraints very, very early in the process. And taking us through this is Cian Clarke from NearForm, and Cian and I have done a couple of things, including actually we shared the stage at AI Native DevCon in New York in May.
[00:00:54] Simon Maple: And we also did a meetup very recently as well in London. Cian, welcome to the podcast. How are you?
[00:01:01] Cian Clarke: Yeah, thanks so much for having me. I'm doing well.
[00:01:03] Simon Maple: Brilliant. Brilliant. And so you're Head of AI at NearForm, right? Tell us a little bit about what that entails.
[00:01:09] Cian Clarke: Yeah, absolutely. So it's really about realizing customers' ambitions in the world of AI.
[00:01:14] Cian Clarke: And more recently, of course, lots of generative AI. That's been something that's been keeping us pretty busy. And so that might look like, you know, unearthing a bunch of proprietary knowledge that some customer might have and surfacing it in their product to their end customers. It might be, you know, building some sort of system to make their teams go faster.
[00:01:29] Cian Clarke: But of late, we've also been thinking an awful lot, not just about what we build in the space of AI, but also how we build it. And so NearForm sort of came of age in the era of Node.js, and that's been sort of the technology we've been known for.
[00:01:52] Cian Clarke: And actually, we've been thinking a lot lately about AI native delivery and delivering with AI models, the enhanced capabilities of coding models, and spec driven development. And so that's really brought our paths together, I suppose, since we are very interested in all of the tool sets and ways to build software artifacts through AI. So, you know, more about the how we build as well as what we build with AI.
[00:02:11] Simon Maple: Which is super interesting, 'cause, you know, there's obviously a lot of consultancies and people like that who are really not adopting AI in a very interesting way. But you're really leading a lot of the way with a very, you know, spec oriented and future thinking path of building with AI.
[00:02:28] Simon Maple: So we'll explore some of this in just a second. Why don't we first talk a little bit about BMAD? What does BMAD mean to you?
[00:02:36] Cian Clarke: Yeah, absolutely. So to me, it's sort of the best approximation we have right now of what a comprehensive spec driven workflow looks like. And so there's a tagline attached to BMAD of Build More Architect Dream, but it's also sort of named after the guy who created it, Brian Madison. So, BMad.
[00:02:54] Cian Clarke: And it's this really interesting open source project, which I think is quite important to us at NearForm, since a lot of our pedigree comes from the world of open source and contributing upstream to things like Fastify and Node core. And so, yeah, the fact that this is an open source framework that is also open to the selection of model vendor and open to the selection of IDE tool set as well is something that really, really appealed to us.
[00:03:17] Cian Clarke: And in my head, actually, at its very core, what it's about is just, like, really intelligent context engineering with a model. And that's kind of what spec driven is all about, at least in my head, you know, giving a model the best context possible to go and autonomously build software.
[00:03:36] Cian Clarke: Yeah. And it, as I say, best approximation I've seen to date of how to go about doing that.
[00:03:45] Simon Maple: And I didn't know that about the name, the BMAD. That's Brian Madson, is it?
[00:03:49] Cian Clarke: That's it, yeah, that's it. Madson. I think it is.
[00:03:51] Simon Maple: Madison. Sorry, Madison.
[00:03:53] Simon Maple: Yeah. So it started as a name, and it was searching for an acronym that really fit.
[00:03:58] Cian Clarke: I think so, yeah, exactly. The acronym now is a bit of a stretch, I feel. The Build More Architect Dream.
[00:04:03] Simon Maple: Makes a ton more sense. I was looking at that and thinking, wow, where did this come from?
[00:04:08] Simon Maple: But now it makes a lot more sense.
[00:04:09] Cian Clarke: But it's a catchy name, yeah. And it started with this person, but, you know, there's a huge suite of contributors now on this tool set, and it's this really vibrant open source project that's gone through a couple of different versions. The v6 version that's just released has a bunch of really nice developments in terms of being able to select the scale of project you're working on. It's got a module system where you can bring your own roles. So it's really evolved massively over the last couple of months.
[00:04:37] Simon Maple: Yeah. Yeah. Let's get into that in a little bit, but I guess the one question that I'd love to ask, 'cause I know Nearform, you know, do a lot of work in kind of like the discovery of various, you know, a lot of the research and discovery of various tools.
[00:04:49] Simon Maple: I know you're big fans of Kiro, for example. I know we've talked in the past about using Tessl for, you know, discovery in context and learning about, you know, how agents can be much better enabled. How did you kind of go through the discovery period with BMAD?
[00:05:07] Cian Clarke: Yeah. So it sort of came out of our own internal discovery process, really. And we would do very frequently in consulting this kind of discovery process that we call Ignite, which is about aligning a bunch of stakeholders on what exactly it is that they want to build. And, you know, in the world of consulting, we call this discovery, but, like, it happens in every company, basically a bunch of folk getting together and talking about what the requirements are for the thing they're looking to build.
[00:05:33] Cian Clarke: Mm. What's really important, like what levers within the business is it going to impact, and being able to document some stuff, you know, really, really well. And we noticed that we could use things like AI note takers within the meeting to be able to capture a lot of that context and then distill it down into playback decks and also distill it down into requirements documents.
[00:05:55] Cian Clarke: And so it was the folk who were facilitating these sessions who observed that actually you can get really good requirements documentation and backlogs off the back of these meetings, just talking about how and what it is exactly that we wanna build. And I know the folks in AWS do this a lot with their pre-read time in meetings when they're doing their PRFAQs, and now they've moved to this term called squad mobbing.
[00:06:19] Cian Clarke: So this is, you know, a really big
[00:06:21] Cian Clarke: methodology in industry as well. This idea of gathering requirements as a group and then using that as an artifact to then later go and build. And so we kind of spotted, you know, we had been vibe coding prototypes that we would later throw away, because that's what you tend to do with an artifact that's vibe coded.
[00:06:38] Cian Clarke: But actually, like, could we ground the process of using AI to develop with these artifacts? And that kind of aha moment happened probably about six months ago. And it wasn't me who had it; it was a member of my team, James, who helped out there and discovered this tool called BMAD along the way.
[00:06:56] Cian Clarke: That is really, really good at taking some of that documentation and using it to hone the output of a model. And so I discovered this spec driven thing and, as a pretty, you know, vibe coding skeptic, this was sort of a revelation for me, because all of a sudden you're actually able to steer what the model did. And yeah, that was pretty powerful, I think.
[00:07:17] Simon Maple: Yeah, and a lot of the things that you're kinda like talking about here, whether it's spec driven or BMAD, they're really, they're really kinda like smart. And I guess, you know, they're solutions that are well thought out. But I guess when we go back to the root problem, what would you say is the need for these types of solutions?
[00:07:37] Cian Clarke: Yeah. For me, it came down to, like, forcing the world of vibe coding to sort of grow up a little bit and have some guardrails. And so instead of having the model fill in that uncertainty and fill in the gaps, have stakeholders fill in the gaps and make sure the requirements were actually complete.
[00:07:51] Cian Clarke: Because the challenge with incomplete requirements going into a model when it's building a software system is that ambiguity at a minimum becomes some sort of, you know, hallucinated feature that the model has made up, and potentially becomes masses of rework and wasted tokens.
[00:08:12] Cian Clarke: And ultimately, you know, real kind of code smell in the repository as well. So the ultimate artifact winds up kind of harmed as well. And so I kind of think of it as, you know, almost like context engineering on steroids for software projects, making sure that the model is grounded as well as possible in the what that it's building, as well as the how you want it built.
[00:08:35] Cian Clarke: Being able to give that steer not in a one shot prompt, but in some documentation, that's kind of where I see spec driven and BMAD being really, really strong.
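The grounding Cian describes, steering the model with documentation rather than a one shot prompt, can be sketched roughly like this in Python. The `specs/` directory layout and the `build_grounded_prompt` function are illustrative assumptions for the example, not part of BMAD itself:

```python
# Illustrative sketch only: steer a model with spec documents as context
# rather than a single ad hoc prompt. The specs/ directory layout and
# function name here are assumptions, not BMAD's actual API.
from pathlib import Path

def build_grounded_prompt(task: str, spec_dir: str = "specs") -> str:
    """Assemble a prompt that grounds the model in on-disk spec documents."""
    parts = [
        "You are implementing part of a software system.",
        "Follow these specifications exactly; ask rather than invent:",
    ]
    # Include every markdown spec so the requirements, not the model,
    # fill in the gaps.
    for spec in sorted(Path(spec_dir).glob("*.md")):
        parts.append(f"--- {spec.name} ---\n{spec.read_text()}")
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)
```

The point is simply that the "what" lives in reviewable documents, and the prompt is assembled from them rather than typed fresh each time.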
[00:08:45] Simon Maple: And what's really interesting here is kinda like when we think about what, you know, the way BMAD goes around trying to solve this, and specs, you know, if we think back well before kind of the AI boom here, you know, we were thinking about these types of things back in the BDD days and things like that.
[00:09:02] Simon Maple: And it's interesting that when we talk about the problem now, the problem definition is peppered with AI terms. And I think now the problem is actually far greater because of those types of things.
[00:09:21] Simon Maple: So a lot of the issues you mention around vibe coding magnify the problem. Vague requirements and things like that, which we previously had, are just magnified so much more because of the additional complexity that vibe coding and these types of agents can bring along as well.
[00:09:34] Cian Clarke: Yeah. Like, as engineers, if we had a gap in requirements documentation that seemed really obvious that would impact users, you know, we would go and ask a clarifying question of a product manager or somebody who's a stakeholder on the project.
[00:09:46] Cian Clarke: Yeah. And we'd be a little annoyed by it, like, come on, we're mid sprint. Why didn't you think of this? Yeah. But we'd fill in that gap. The model just makes shit up, right? That's not a great outcome.
[00:09:54] Simon Maple: Yeah, yeah. Sometimes it is, but a lot of the time, it isn't. So let's go ahead and talk through an example, a NearForm rollout, I guess, of a BMAD project.
[00:10:08] Simon Maple: Yeah, I guess there are certain times when it's worked better and certain times when maybe you've actually learned, actually, you know, it's not quite as good here. Talk us through, you know, one of the most solid kind of BMAD rollouts that you guys have done.
[00:10:20] Cian Clarke: Yeah. So it's sort of to date become the tool of choice for dev teams at NearForm who are looking to progress things in an AI native manner.
[00:10:28] Cian Clarke: Probably alongside Kiro as the other strong contender. And all along, the observation I had had was that it is not a great tool for actually kind of rapid prototyping and building a functional prototype that you plan to throw away at the end, because the process is quite heavy.
[00:10:46] Cian Clarke: It has actually made great leaps and bounds in v6's release in terms of enabling you to select the sort of level of project that you're looking to build. And if you're looking to build a prototype, it simplifies the workflow. I haven't had enough hands-on time with it to form an opinion on whether that's been successful, but traditionally, for rapid prototyping I'm still reaching for Bolt.new or something.
[00:11:04] Cian Clarke: But where we're looking to actually build some sort of MVP and something that's actually shippable in the greenfield, we found that absolutely fantastic. I think there are probably some challenges to date in the brownfield, although it is the best thing I've come across for working in large brownfield code bases.
[00:11:22] Cian Clarke: Nonetheless, there are challenges with that. But for greenfield projects, it's been fantastic. And the way I kind of try to approach this, so my background is not in consulting. I'm relatively new to the world of consulting. I've mostly worked in product companies through engineering leadership roles.
[00:11:41] Cian Clarke: And I like to think about rolling out these tool sets to teams of developers through the context of developer empathy. So we talk a lot about customer empathy in the sales process, but I think developer empathy is actually a very important thing to recognize as an engineering leader.
[00:11:56] Cian Clarke: And it's like this, when you were hands on keyboard, walking in the shoes of a developer, what stuff did you just hate? Like, what drove you mad? And so, like, you know, writing infrastructure as code stacks only to have the CloudFormation deploy roll back 15 times in a row and having to go back and delete stuff manually.
[00:12:15] Cian Clarke: And just that drudgery I used to hate, or, you know, fixing an automation test that's blown up because an element moved 10 pixels to the right in the middle of the night. Like, that's really frustrating. Not a good use of my time. Even documentation. So all of those tasks.
[00:12:33] Cian Clarke: You know, how can we introduce AI into the development lifecycle to make those things a lot less painful? So, like, what would developers really gravitate towards in trying to automate away some of that pain?
[00:12:56] Cian Clarke: And to that end, you know, something that I wonder is, like, do developers actually really enjoy writing documentation and specs, right? Because this whole methodology is based on spending a lot of time with a markdown file. And I think some do, some don't. But as I think about how to roll out both BMAD and a broader suite of MCP enabled tools that are connected into an IDE, I like to think about the sort of developer archetypes that are going to be interacting with these systems and what are they actually getting from it?
[00:13:23] Cian Clarke: What makes their life like that bit easier?
[00:13:26] Simon Maple: Super interesting. And yeah, it's interesting actually. I remember giving a session, I think it was the closing keynote at DevOps UK, and I gave a session about what a future world will look like with AI native development.
[00:13:40] Simon Maple: And someone came up to me at the end of the session, and they had, like, worry on their face, going, you know, you're projecting a future here where developers just don't code. Is that true? And I'm like, well, you know, honestly, I think you'll
[00:13:54] Simon Maple: be hard pressed to find anyone who will say code will always be a thing. And while you'll have some skeptics that will say, not in my lifetime, it's gonna happen at some point. It's just a level of abstraction above where we are. The question, I think, is when. And I guess it depends whether a developer is
[00:14:16] Simon Maple: so attached to the coding or whether they're attached to the creation, like the build, the complexity of the problem solving. And I always think people who care about the creation, the construction, almost the how I build this thing, in terms of how I architect it, how I think about various things integrating and building together.
[00:14:38] Simon Maple: That's all still gonna be there, and hopefully not going away.
[00:14:41] Cian Clarke: Absolutely. Yeah.
[00:14:42] Simon Maple: Hopefully we'll abstract ourselves on top of that while still, you know, having input into those discussions. Yeah. But I think for people who are fully attached to code, that's, in my opinion certainly, and I think in a lot of people's, absolutely gonna be reduced, and someday most likely
[00:14:59] Cian Clarke: It’s a hard position to hold, isn't it?
[00:15:01] Simon Maple: It is. It's a very hard, yeah.
[00:15:02] Cian Clarke: An artifact is definitely becoming less important. That doesn't mean that the quality of the output of that artifact doesn't matter. In fact, all of the best practices that we followed all these years are just as important as before. And, you know, having code that is linted such that you can actually read it and maintain it is really important.
[00:15:19] Cian Clarke: And having code that is incredibly well structured is really important. And, you know, the art of selecting what dependencies you're using. You definitely don't wanna be picking the latest version of Next.js, as recent events have shown us. Yeah. But the art of actually picking what modules, dependencies, and architecture you're going to use becomes the part of the craft that becomes more important in my head.
[00:15:43] Cian Clarke: And, you know, the actual line by line writing and production of code is sort of the thing that becomes less important. One of the funny things about NearForm is, as a business, we've always hired senior staff. So I think the average experience in the business runs about eight years, but lots of folks have 10, 15, 20 years of experience.
[00:16:01] Cian Clarke: And it feels like those types of developers are going to have a much more productive time in steering some of these tools, right? Like, it's gonna be interesting to see over the next couple of years what that does to the shape of general software engineering teams.
[00:16:17] Cian Clarke: One, you know, really unfortunate artifact we've seen of this AI native era is probably the kind of reduction in hiring pipelines, and also what the future looks like for junior developers.
[00:16:30] Cian Clarke: It's become a bit of a challenging time, and, you know, that's something that probably bums me out a little bit.
[00:16:35] Simon Maple: Yeah, and I guess the fear, uncertainty, and doubt here is quite high in terms of what this means for junior developers. But I guess my opinion on this is:
[00:16:47] Simon Maple: I guess senior developers are more likely to be able to naturally switch over because they just have that experience and they have that underlying knowledge. I think the path of a junior developer is just gonna be different. I think they're gonna have to learn different things, and arguably I could almost say, well actually, by being a junior developer, you're gonna be naturally more AI native because that's the path you learn.
[00:17:10] Simon Maple: And so we're gonna probably have this switchover where the junior developers are actually gonna be somewhat more reliant, but also more happy to jump in. They will choose to use AI probably with a lower bar than a more senior developer. So they're probably jumping into it more.
[00:17:29] Simon Maple: But that senior developer today certainly does have that kind of like more of that knowledge, particularly around when they're writing specs, what they want, and why they know why they want it that way.
[00:17:38] Cian Clarke: Yeah, and I'm even not as worried about this notion of, you know, all of a sudden teams just being seniors and the pipeline of junior talent drying up.
[00:17:46] Cian Clarke: I think the reality of market economics is that nobody's going to let that happen, and the need for junior team members is going to be very, you know, very quickly solved by hiring pipelines and so on. That's gonna have to be balanced at some point in the future, and there might be a minor reckoning, but we're still gonna need junior developers that we grow into senior talent.
[00:18:05] Cian Clarke: Maybe that more progressive mindset of somebody who hasn't been doing this for as long is more open to the change that AI native brings, and actually it ends up being an advantage. Who knows?
[00:18:13] Simon Maple: Right. Yeah. Yeah. No, I very much agree. So, we talked a little bit about specs there as well. I guess when we think about from the developer point of view, the developers are gonna be potentially writing specs, maybe manually, maybe through agents or, or LLMs.
[00:18:27] Simon Maple: The specs ultimately will absolutely be consumed by agents, but we're at that funny crossroad where kinda like both an LLM and an agent and a human need to read this and need to understand it. Because one is generating code from it, the other one is trying to maintain it from the point of view of, you know, making sure the features are correct and well written, et cetera.
[00:18:53] Simon Maple: Who would you say needs to be almost like the primary stakeholder? Who's the person this specification should be written for? And I guess a little bit of a comparison here is when we write code today, we have two options. We either make it the most performant, most effective code possible, which means we are writing for, you know, maybe the JDK or the machine code that is the most performant, most effective possible.
[00:19:22] Simon Maple: That makes it far less maintainable. So we very often make decisions to make something more maintainable because we feel we are the ones who are trying to maintain this and read this as well. Let's make it most readable for us and let the compilers and everything else deal with a lot of the efficiencies after the fact.
[00:19:42] Simon Maple: So yeah, who is this primary artifact, the spec, written for primarily?
[00:19:50] Cian Clarke: Yeah, that's an interesting question, and I like the sort of code optimization comparison. I also love that you had to get the JDK in there. Some of your roots are shown through here, Simon. That's great. Yeah.
[00:20:01] Simon Maple: My, my JDK, my JDK gray hairs. Yeah.
[00:20:03] Cian Clarke: Yeah, yeah. I think that ultimately, for me at least, writing the specs for human consumption is absolutely vital, because that review phase is quite important. Mm. If you are writing outputs purely for the consumption of a model, you're probably not really reviewing what it is that's been generated.
[00:20:21] Cian Clarke: You know, if you're generating a PRD off of a transcript and just blindly accepting it, in my eyes, you've probably missed the purpose of spec driven development in the first place. And actually the honing of that spec, however much some developers may not enjoy crafting markdown, is actually the really important part of this process.
[00:20:38] Cian Clarke: In terms of the model being able to interpret what's been written, I actually don't know how much of a consideration that's been in my experience. Models are incredibly adaptable for sure. You know, if we're putting in, like, I don't know, a PDF of a Gantt chart, we're probably gonna have a bad time. That's not gonna go well.
[00:20:56] Cian Clarke: But, you know, things like Mermaid.js syntax, which I've always done anyway, or even just the subset of web sequence diagram syntax, that type of thing in technical documentation is invaluable. If you've come of age as a developer in the world of building specifications and markdown on GitHub, you're gonna have a great time.
[00:21:18] Cian Clarke: If you've come of age in the world of, oh, I don't know, Microsoft Word documents and diagrams scaffolded using those constructs in XML, which is ultimately what the Word format bakes down to, you're probably not gonna have as good of a time. I've found that the kind of modern way that developers architect their documentation, models are just really good at reading, and that's a really nice coincidence.
[00:21:45] Simon Maple: Yeah, yeah. No, absolutely. And we kind of mentioned there a little bit about maintenance, I guess. We're gonna have the same issue with specs, right? How do we make sure that specifications don't become stale, that developers are continually upgrading the spec, and it doesn't get out of sync?
[00:22:04] Simon Maple: I guess when we think about BMAD specifications, the BMAD method, how do we make sure that this isn't turning into a maintenance hell and that we don't get stale documentation that just sits there, nobody reads it, similar to kinda like what we saw the PRD become?
[00:22:21] Cian Clarke: I think there's this continuum, isn't there?
[00:22:23] Cian Clarke: In the world of spec driven, between spec as source as the first term, you know, the spec as the baseline artifact that the code is generated off, such that you could throw away the code and get the exact same code artifact, or a very similar code artifact, when you regenerate from the spec.
[00:22:42] Cian Clarke: Versus a more kind of spec first driven methodology where the specs essentially, you know, last for the life cycle of the epic or the feature or the task, and then get thrown away. And so, you know, tied back to that is this world of maintaining the documentation. And I feel like as the frameworks evolve, we will end up more in a world of spec as source, where the really important artifact is the spec.
[00:23:06] Cian Clarke: You know, cascading all the way back to the product requirements document. But in the immediate term, we are definitely seeing some degree of, like, documentation rot and documentation going out of date.
[00:23:18] Simon Maple: Or
[00:23:19] Cian Clarke: Out of sync, a little bit stale. In BMAD specifically, I've seen this solved for a little bit through the ability to inject into the backlog new work items that, you know, might alter the course of development.
[00:23:33] Cian Clarke: And within that work log item in the backlog, you will also have a specific spec for the story that talks a little bit about the detailed implementation of that story and what we expect from it. Does that cascade back neatly to the PRD and architecture document if needed?
[00:23:49] Cian Clarke: Sometimes, but not always. Yeah. And that just feels like a direction of travel that eventually, you know, PRD type requirements documentation will wind up also being kept in sync. Feels like that's the way that the tooling is headed, but we're certainly not there yet.
[00:24:05] Simon Maple: Yeah. And it's interesting when we talk about it, because there's, like, you know, you talk about tooling there, and that's the way the tooling is heading. And I guess there's, like any major kind of change to our industry, there's different things that need to change.
[00:24:21] Simon Maple: There's tooling, there's practices and process, there's the human cultural side. And, like, DevOps is a great example of people sometimes initially over rotating on the tooling, but actually coming back to realize this is about team and process. There's cultural change that we need to address first.
[00:24:40] Simon Maple: I guess BMAD, let's ask the same question of BMAD. Is this fundamentally a technological problem, or is this a cultural practice that we need to change?
[00:24:47] Cian Clarke: I think it's a mixture of both, but it sort of starts out in the world as a change in culture.
[00:24:52] Cian Clarke: So what it sort of does is it forces you upfront to expose a lot of the assumptions that might exist within a software project.
[00:25:00] Cian Clarke: Because, you know, as we discussed earlier, if you're unearthing these discoveries and these assumptions as development is ongoing, you know, results may vary. And so it's very important to have a well aligned piece of documentation with all of the stakeholders that talks about what exactly it is that you're building.
[00:25:18] Cian Clarke: And to me, that's a pure cultural change. There are also kind of technical practice changes coming about as a result of the use of many of these spec driven tools, BMAD included. In particular, probably the worst technical change that I've seen is the transition from teams to sort of the sole contributor. And that's one of the things we are most focused on trying to solve for.
[00:25:39] Cian Clarke: How do you scale a spec driven framework to an entire team of developers rather than an individual contributor that is taking on the role of DevOps plus front end engineer plus backend engineer, plus, you know, the list goes on.
[00:25:58] Cian Clarke: I think that's almost not a great way to be building within the enterprise, and we need to better solve for that collaboration piece. And although right now the challenges are purely technical, it feels like ultimately it impacts the culture, because you're expecting an individual contributor to be able to wear all these hats.
[00:26:19] Cian Clarke: The limitation is purely technical. It is that right now we are mixing the work of the backend and the front end engineer in our decomposed task lists that are output by a lot of these spec driven tools, assuming full stack engineering. But in reality, we could be aligning these tasks to individual roles and have people contribute.
[00:26:41] Cian Clarke: So I am optimistic of the future for this sort of more collaborative culture that spec driven will bring. But the tooling right now just needs to catch up.
[00:26:50] Simon Maple: Yeah. I love the phrase you used there as well, where it forces you. With a culture change in terms of the way we go about building software, or doing anything in the space of tech, you often have all the desire you want in trying to make that change, but ultimately it's the tooling which is the forcing function that encourages and ensures people are actually taking those steps.
[00:27:15] Simon Maple: And I think it's the tooling that helps train us to make those cultural steps, to ensure we actually follow that. And over time, yeah, we'll probably do that without even thinking, but it's that transition whereby the tooling can really help teams grow fast.
[00:27:36] Simon Maple: And I really like what you're talking about with the collaboration and the teams, and I guess I'd love to ask a couple of questions. First of all, maybe go into more depth on specs, and then I'd love to understand at what point that makes it too heavyweight.
[00:27:50] Simon Maple: So why don't we start with specs. This is a cultural change, to actually get people to sit down and write those specifications and think out what they need to put into them. I guess at what depth do we need detail about every aspect of a project, whereby maybe we overthink things, or it actually
[00:28:12] Simon Maple: starts slowing us down versus helping us? Where's the right level of detail, I guess, in those specifications?
[00:28:20] Cian Clarke: Yeah, it's interesting, because I think in the world of startups, you know, the idea of overly verbosely specifying what it is that you want to build is almost, sort of, a bad word, right?
[00:28:31] Cian Clarke: The super detailed spec can slow teams down. I feel like the right sized spec problem is another challenge with the tooling right now. We talked earlier a little bit about rapid prototyping not being a great fit for vibe coding. Bug fixing not being a great fit for vibe coding.
[00:28:53] Cian Clarke: Right now, it feels like the scale of the specs that frameworks like BMAD produce can be quite excessively verbose and heavy. That's one of the things that we're looking to try and trim. Because not only are we looking to right size the specs for human context, but we're also limited in how much the model can comprehend. And the quality of outputs will go down the more and more of the context window we take up with just specs, right?
[00:29:21] Cian Clarke: So that right sized question is probably one of the most important things for these frameworks to get right. It feels to me like BMAD is the closest approximation of that I've seen to date.
[00:29:31] Cian Clarke: But it's not perfect. It does something really clever, called sharding, with very lengthy documents like PRDs and architecture documents.
[00:29:41] Cian Clarke: Which is super clever. So what it will do is split up our requirements documents, so that the functional requirements are split per epic, and our architecture documents, so that components such as front end and backend are split.
[00:29:53] Cian Clarke: And only subsets of those sharded documents are included in the context window when interacting with the model. And it's some of the cleverness like that that makes me think this is the best approximation right now of what spec driven looks like and providing the best possible honed context to a model.
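As a rough illustration of the sharding idea Cian describes: a long PRD is split into per-epic shards, and only the shard relevant to the current story is placed in the model's context window. This is a hypothetical sketch, not BMAD's actual implementation; the PRD format, function names, and character budget are all invented for the example.

```python
# Hypothetical sketch of document "sharding" for context engineering.
# Split a markdown PRD on "## Epic" headings, then include only the
# relevant shard (trimmed to a budget) when prompting the model.
import re


def shard_prd(prd_text: str) -> dict[str, str]:
    """Split a markdown PRD into named shards, one per '## Epic' heading."""
    shards: dict[str, str] = {}
    current, lines = None, []
    for line in prd_text.splitlines():
        m = re.match(r"##\s+(Epic.*)", line)
        if m:
            if current:
                shards[current] = "\n".join(lines)
            current, lines = m.group(1).strip(), []
        elif current:
            lines.append(line)
    if current:
        shards[current] = "\n".join(lines)
    return shards


def context_for(shards: dict[str, str], epic: str, budget_chars: int = 4000) -> str:
    """Return only the shard relevant to the current story, within a size budget."""
    return shards[epic][:budget_chars]


prd = """# PRD
## Epic 1: Authentication
Users can sign in with email.
## Epic 2: Billing
Users can pay by card.
"""

shards = shard_prd(prd)
# Only the Authentication shard enters the context window for an auth story;
# the Billing shard never consumes context budget.
print(context_for(shards, "Epic 1: Authentication"))
```

The payoff is exactly the trade-off discussed above: every token of spec competes with code and conversation for context-window space, so the selection step matters as much as the spec itself.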
[00:30:10] Cian Clarke: Yeah. In theory, every line of spec helps the model, and it's really just about optimizing for the context window. But in reality, that can become excessive. So it's one of the most important things, I think, to optimize for the length of a spec and, you know, how big it can be before it starts hurting rather than being helpful.
[00:30:27] Cian Clarke: That's where these frameworks I think are gonna longer term help us.
[00:30:31] Simon Maple: Yeah. And, oh gosh, this resonates so much. I'm literally doing something right now with a whole bunch of spec style documentation, markdown files, that are kind of sharing core security practices and OS practices.
[00:30:49] Simon Maple: And we're talking like 80 different files of, you know, policy best practices and policies of how to write secure software. Now, if we were to throw all of that directly at an LLM and say, right, do a code review here, there's just too much context for it to kind of hold and say, yeah, I'm gonna do all of this and I'll do it very, very carefully.
[00:31:12] Simon Maple: But the more I'm thinking about it, the way I'm approaching this, maybe even as a code review, is to perhaps say, well, let's actually try and target various flows, whereby, and sharding is a great way of saying it, actually, I'm looking at very specific areas.
[00:31:28] Simon Maple: Maybe you start off with authentication. Let's do a code review for authentication across this massive app, because otherwise you've got so much context and so much code that you're trying to look at, it's just not gonna do a good job. And I don't think
[00:31:42] Cian Clarke: It'll be really interesting to see how much, so right now, a lot of the logic for how this
[00:31:47] Cian Clarke: is sharded has sat in a framework called BMAD. But if you go up the chain, right, if you look at the IDE and how it selects the most relevant piece of context to send to the foundation model, then you go up the chain again and you look at the foundation model as it reasons while completing a coding task and decides which bits of context to pull.
[00:32:07] Cian Clarke: How much can we align on standardized specs that we're using to develop? You know, we have some constructs now, like agents.md. But then we get into, you know, do we use Cursor rules or a project constitution or a Claude.md? Right now, it's a little bit of the wild west. It was nice to see MCP get released into an open source foundation, but there's a lot more to be done on standardizing these sorts of base building blocks of specs across model vendors and across IDEs.
[00:32:31] Cian Clarke: And it's sort of exciting to me to see how many of the techniques and terminology that we use in framework land right now are gonna eventually cascade down into foundation model vendors and also IDE tool vendors.
[00:32:51] Cian Clarke: So yeah, you know, shout out to the vendors of tools here. Let's get a bit of standardization going. The more, the better, right?
[00:32:58] Simon Maple: Absolutely. And I think that's the key as well, because you'll get people, whether it's IDEs or agents across a company, even wanting the same style.
[00:33:07] Simon Maple: And that agent agnostic approach is gonna be so, so important for this. So, you mentioned startups, and I guess the startup culture really values speed and delivery and experimentation as well. Is this the wrong approach then for a startup, or do you feel like there's a path where BMAD can be used there as well?
[00:33:31] Cian Clarke: It's another question: where on the spectrum does this belong within a startup as well? Yeah. I think the sort of, you know, the world of the vibe coder is gonna find this very frustrating, right? Because it's going to slow them down. And the idea of having gone from seeing outputs in 20 minutes to an hour, to seeing outputs at the end of a five, six hour documentation session, will be inherently
[00:33:54] Cian Clarke: frustrating to somebody who's used to, in a really low diligence way, just prompting their way to a really cool application.
[00:34:04] Cian Clarke: But for me at least, the penny drop moment was when the quality reflected the fact that, you know, that slight increase in effort was put in. And so, you know, when I was able to build something in, let's say, two days of spec authoring and iterating through a backlog
[00:34:20] Cian Clarke: that I felt I understood, and it felt like an artifact that would've taken me a month to produce otherwise, and was probably better code than I would've written, that's when the penny really dropped for me. And a repository with linted code that had, you know, two, three, four hundred unit tests, automation tests, and architecturally separate front end and backend infrastructure as code? Just real wow factor.
[00:34:46] Cian Clarke: And I think that any startup that isn't leveraging this build methodology is going to find it very hard to compete in a world where, you know, the vast majority of, for example, Y Combinator startups are using AI tooling to build much, if not all, of their code. A statistic I don't want to misquote, but some incredibly high percentage of startups in Y Combinator are using tools like this.
[00:35:15] Cian Clarke: It just gives that sort of competitive edge, and the world of building product just looks so different now, because anybody can show up with a vibe coded prototype and show something shiny to investors, but actually transitioning it into something that would scale to the first even 500 or a thousand users is a very different question.
[00:35:35] Cian Clarke: And I feel like spec driven is the methodology for doing this.
[00:35:41] Simon Maple: And I think it's interesting. When I see spec driven approaches and more prompt driven approaches, you almost start looking at a prompt driven approach as being a very prototype style way of doing things.
[00:35:53] Simon Maple: I've even known someone to start with a prompt driven approach, and then send their, kind of, Claude logs or whatever it is over and say, look, build me a spec based on these prompts. From my prompts you can see where my intents are, and from the code that's been generated, what has been built and what I'm happy with. Build a spec that represents this.
[00:36:11] Simon Maple: And I guess, do you feel like going fully vibe in prompts without building a spec is a strong limiter for people who actually want to go much further, much closer to production, those who want to scale AI within their organizations?
[00:36:31] Cian Clarke: As you've probably gathered, that's kind of the framework of how I think of things that have purely been prompted. That's not to say that you couldn't construct a perfect prompt that actually encapsulates all of the context from your specs, just perfectly right sizes your context window such that it's built, you know, task by task, a bunch of different smaller stories in very well crafted prompts.
[00:36:56] Cian Clarke: That's definitely possible, but the methodology of spec driven with the combination of a framework makes you more likely to be successful. Yeah.
[00:37:05] Cian Clarke: What's also interesting is, like, is spec driven what the future of AI native engineering looks like? I'm not actually that dogmatic about the fact that it is. I'm not completely convinced.
[00:37:16] Cian Clarke: It feels as though Cursor in particular has been slow to adopt a lot of the primitives, and instead has gone, with their recent release, with plan mode, which feels like a spec driven compatible way of operating, but it's not exactly that. So it feels like the best approximation right now of how to work with these models is this spec driven methodology, and also the BMAD framework itself.
[00:37:42] Cian Clarke: But that's not to say that, you know, next year that's what things are gonna look like. If I've learned one thing in the whirlwind that's been the last nine months, it is that my crystal ball is inherently faulty, and who knows?
[00:37:59] Simon Maple: Yeah. The crystal ball, I saw an eight ball the other day where it's an LLM implemented eight ball, which is like, yeah, everything's going in the future.
[00:38:11] Cian Clarke: It’s a crystal ball, it will repeatedly say.
[00:38:13] Simon Maple: You're absolutely right. So, when we think about how LLMs typically are kind of better with the more structured specs, with clear requirements.
[00:38:27] Simon Maple: When we think about those specifications or those systems that are poorly specified, what do they become? Do they become something which just, you know, holds technical debt, or are they now a liability? Because of the way an LLM is essentially gonna implement that and make assumptions.
[00:38:44] Cian Clarke: Yeah. It's an interesting kind of question, isn't it? Like, what becomes of technical debt? Do we have this new thing that is spec debt or requirements debt or inadequate requirements? Because when you think about tech debt, it surely should at least fade into the background, because it's now easier to tackle backlogs of tech debt through agentic coding, right?
[00:39:09] Cian Clarke: You know, the sort of drudgery of, well, actually it doesn't feel like drudgery sometimes, it's really rewarding, but the art of eliminating tech debt is now much easier to go after with an agentic coding system, typically. So, you know, does that become really important, when you can solve for tech debt much more easily?
[00:39:27] Cian Clarke: And in fact, it's something that's not really aligned to what you wanted to build that becomes, you know, the problem. So, poorly defined requirements. After going through, you know, three, four days' worth of spec driven development, where you suddenly have this 20, 30,000 line repository, is it actually harder to recover from the fact that you missed some requirements along the way?
[00:39:49] Cian Clarke: Yeah. And is there flexibility sufficient in the workflow to be able to make up for that? Certainly some of the techniques I've tried have been things like injecting new backlog items to the end of a backlog to try and recover and still following the spec driven methodology rather than bailing to vibe coding, which is definitely a temptation.
[00:40:06] Cian Clarke: But following the methodology to steer towards the desired output. But yeah, it's definitely tricky. The veracity as well of that kind of requirements documentation artifact. I know the Kiro team are doing loads of really interesting research at the moment on what makes a really good, well defined, structured requirements document, through, I think, the use of formal methods to ensure that the requirements are well formed
[00:40:30] Cian Clarke: and complete, and that the gaps are actually identifiable by the model as well.
[00:40:48] Simon Maple: Yeah. And I guess, you know, the goal here of BMAD, when we think about agents as they are today: agents are becoming more and more autonomous. Are we trying to make them more autonomous through things like specifications and through BMAD methods? Because that allows us to create a more detailed brain dump of what we are thinking.
[00:41:15] Simon Maple: Or are we still aiming for that very assisted, back and forth, chat style flow, whereby it just makes it safer because we are giving it the guardrails it needs? Is there a kind of North Star that we're trying to get to, or are both very applicable?
[00:41:33] Cian Clarke: Yeah, I think it's a bit of both.
[00:41:35] Cian Clarke: I need to get off my fence, don't I?
[00:41:38] Simon Maple: If you were to choose one though, Cian.
[00:41:41] Cian Clarke: I refuse. Backlog decomposition for me feels like the place where there's a little bit of room now to speed up with the sort of safety and guardrails we've injected into some of these frameworks.
[00:41:54] Cian Clarke: So like the iteration of an architecture document and a requirements document, that feels like a pretty important stage to still have a human in the loop.
[00:42:02] Cian Clarke: But when you've decomposed a backlog and reordered it to your heart's content, and you feel like the stories that an agent is going to go after one by one, with a new context window for each one,
[00:42:12] Cian Clarke: is in good shape. I'm not sure that me sitting there and trusting every single command that the model suggests I run is necessarily a great use of my time when I can, you know, run the agent in a sandbox and force it between each story to get, commit, get, push. And if it decides to rimraf/* that's okay by me.
[00:42:30] Cian Clarke: 'cause I have everything in Git and it's in a container and who cares, right? You know, you see all the memes of, of folk vibe coding and seeing it rimraf/* their, their entire directory, or -rf, yeah, the, that becomes less of a problem when you've got sufficient guardrails and you're operating in a nice sandbox VM, and so on.
[00:42:48] Cian Clarke: And so it just feels like sitting there iterating through every single story is maybe less useful. I'd love to be able to accelerate through a bunch of stories and just review the output of epics, maybe. And so you get towards more of that autonomy. And when it comes to, as I said, that earlier phase of actually writing the specs, authoring the specs, aligning on the specs, I feel like there's less room for autonomy right now.
[00:43:12] Cian Clarke: But if we were able to also make that highly autonomous, like where do we end up? Are we actually back to vibe coding somehow? Yeah. I don't know.
[00:43:21] Simon Maple: It kinda like relies a lot on trust, right? Because if we don't have trust, we just plain don't have autonomy, because the trust is that piece in the culture that allows us to allow the AI agents to stretch those legs and, you know, run as fast again.
[00:43:35] Simon Maple: Right? It's like yeah,
[00:43:37] Cian Clarke: Enable you to go after CI/CD and, and then, yeah,
[00:43:40] Simon Maple: Yeah, yeah. So does BMAD, I guess, change that model then, when we think about trust? You know, does it provide us with more trust? And maybe we kind of lean a little bit more into quality, the quality of AI output, and how specifications can provide that greater quality, because
[00:44:00] Simon Maple: it's providing those guardrails. And as a result, knowing those guardrails can be validated, whether there's automation in the validation and things like that, so that things can be raised to us to say, no, this is actually going against what we've said in the spec, we not only recognize there's higher quality here, but we then have greater trust.
[00:44:20] Simon Maple: Is that a path where we can, you know, raise the autonomy bar?
[00:44:24] Cian Clarke: Yeah, I think that that is exactly the lever that we need to be able to make the transition in autonomy, being able to trust the outputs of the agent. And so, like, if in the short term it looks like, you know, on a per context window, per story basis, we're reviewing outputs, so be it.
[00:44:40] Cian Clarke: But the more that the sort of quality metric is pushed further to the right, the more we can afford that autonomy, and the more we can afford to accelerate. And so I think what will get very interesting is, as we transition, as models are writing more and more tests to cover what it is that they've done. It's funny, you know, the same tricks reign through, like TDD and so on.
[00:45:04] Cian Clarke: The more that models are able to actually test their outputs and prove that what they claim is working is in fact true, through a completely separate artifact rather than just interpreting the code, the more that we can trust that autonomy. And it's the traditional software engineering techniques that we've always used to build trust in an artifact, and to build trust in the deployment of an artifact, just being used by agents. I do sometimes wonder, is there a world where
[00:45:34] Cian Clarke: there are some new constructs that models use to test the validity of code, that we don't traditionally use in a software development life cycle, that can be used here? Maybe that's where the research goes. I don't know. The Kiro folks, again, shout out to them, have probably done the most in this space with their, oh, what do they call them?
[00:45:55] Cian Clarke: Checks, I think. They have a new, you know, testing methodology that checks for completeness in a way that's not TDD and is not BDD, but is something different.
[00:46:05] Simon Maple: Yeah. And it's funny when we talk about TDD. When we think about specs, I remember in the early days of Tessl, when we were looking at what goes into a spec, and how it should be phrased.
[00:46:15] Simon Maple: What I found immediately very interesting was how, if you just write down a capability in a specification, you know, you can think about that in two ways. You can think about it as something I want the application to do, but also, just slightly written differently, it can be written as a test.
[00:46:34] Simon Maple: You can say, when this happens, I want you to provide this back. And, you know, just a slight rephrasing turns it into a test driven development style approach, right? Actually, your specification is a list of things that you want to be true.
[00:46:54] Simon Maple: Essentially test cases. So I guess, you know, what came first, the chicken or the egg? What should come first in the spec, the capability or the test?
[00:47:05] Cian Clarke: That requirements writing style, EARS, is that what it's called? EARS requirements set you up quite nicely for a specification of what a test to subsequently validate that requirement could be.
[00:47:20] Cian Clarke: And so it feels like that is vastly more important than, I don't know, your dump from your brainstorming session earlier on. So actually getting a succinct statement of what that requirement is, and being able to use it as a testable artifact, is highly valuable. Yeah. You know, maybe we'll be in a world where a Playwright automation test is written to validate a piece of functionality purely off of
[00:47:43] Cian Clarke: observation of the produced interface, plus that statement, with no insight into the code. The code is a black box. And the methodology is: here's the interface, here's the requirement, prove that this actually works. Model, off you go, you know? Yeah. And it's just an LLM as a judge.
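As a toy illustration of the point Cian makes here: an EARS-style requirement ("When &lt;trigger&gt;, the system shall &lt;response&gt;") rephrases almost one-to-one into a test case. Everything below is hypothetical; the `authenticate` function and its behavior are invented for the example, and EARS itself only prescribes the requirement wording, not any code.

```python
# Hypothetical sketch: EARS-style requirements mapped directly onto tests.
# The toy system under test is invented for illustration.

def authenticate(password: str, attempts: int) -> str:
    """Toy system under test: grant, deny, or lock based on inputs."""
    if attempts >= 3:
        return "locked"
    return "granted" if password == "s3cret" else "denied"


# Requirement (EARS): "When a user fails authentication three times,
# the system shall lock the account."
def test_lockout_after_three_failures():
    assert authenticate("wrong", attempts=3) == "locked"


# Requirement (EARS): "When a user supplies a valid password,
# the system shall grant access."
def test_valid_password_grants_access():
    assert authenticate("s3cret", attempts=0) == "granted"


test_lockout_after_three_failures()
test_valid_password_grants_access()
```

The requirement sentence and the assertion carry the same information; the spec is already the test, just phrased for humans.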
[00:47:56] Simon Maple: LLM as a judge style, just trying to look at what's been created.
[00:48:01] Simon Maple: Yeah, absolutely. And I think actually there are several papers that show that is a very valid way of, you know, running evals, and actually very, very accurate as well. Yeah. Cool. I guess why don't we wrap up with a little bit more on some of the things that NearForm have done.
[00:48:18] Simon Maple: You've run a number of experiments, certainly around BMAD and SDD. And from talking with a large number of people, I think you're probably, from a consultancy point of view, definitely the group that has, you know, jumped in most with this kind of thing.
[00:48:37] Simon Maple: So I'd love to hear, what are some of the most interesting experiments, I guess, that you've played around with?
[00:48:45] Cian Clarke: Yeah, we've definitely gone hell for leather. And so I suppose some of the kind of challenges that we found in rolling out a framework like BMAD to a team of people, since that's how we tend to operate, have forced us to experiment a little bit with, you know, how we overcome that.
[00:49:01] Cian Clarke: And so I've done a little bit of spiking on how to codify some of the roles of our team within BMAD. A colleague of mine, Lucas, shout out to Lucas, is doing incredible things at the moment, getting the backlog generated from BMAD to sync with an issue management system like GitHub Issues.
[00:49:19] Cian Clarke: And also being able to orchestrate these workflows, not just locally on a developer's machine, but through something like a GitHub Actions workflow. So a super cool way of running spec driven development tasks through a more shared interface where the sort of work log is visible by all members of the team through a web browser rather than an IDE.
[00:49:40] Cian Clarke: Helps with that kind of going parallel as well. And that's been most of our experimentation throughout the latter half of 2025. And so what we're now toying with as a business is, do we just, you know, embrace chaos and let a bunch of really clever people drive their own experiments? Which, yes, absolutely, is the open source way.
[00:50:03] Cian Clarke: Yeah. In 2026, you know, the hope is that we'll be able to harvest from a bunch of these really clever ideas a more kind of unified Nearform way of delivering at scale, overcoming some of the challenges that we have with the core framework. You know, does that wind up a module within BMAD? Do we roll our own framework?
[00:50:22] Cian Clarke: Do we contribute back to BMAD? You know, that very much remains up for grabs. But it feels like an area where we're going to need to contribute something, the hope being something in the open source domain, to try to solve, in particular, going parallel across multiple teams. That's been the big area of focus.
[00:50:43] Simon Maple: Super interesting. And, to be honest, I love working with you guys, and I can't say enough nice things about you and the team. For folks who wanna learn more, or first of all keep up to date with some of the great things that NearForm are doing, or, you know, reach out to work with you,
[00:50:59] Simon Maple: what's the best way of getting in contact and learning more about NearForm?
[00:51:04] Cian Clarke: Yeah, absolutely. I mean, a great place to start is nearform.com. The jobs board is up there. In terms of what we've been experimenting with, spec driven, I'm the worst on our blog. I think I've written more on your blog than I have on my own at the moment.
[00:51:17] Cian Clarke: But, you know, shameless plug, come follow me on LinkedIn. I tend to, you know, post fairly techy stuff on the world of spec driven development pretty sporadically. Our own Nearform Insights account on LinkedIn also posts a bunch of great content. A couple of my colleagues post a lot, retweeted by the main account.
[00:51:40] Cian Clarke: In the world of spec driven. And so we're a pretty active bunch on LinkedIn, and that tends to be where we talk the most about our experiments and spec driven and what's working and also what's not working. You know, we're a pretty open book when things don't go well as well.
[00:51:54] Simon Maple: Yeah, yeah. Sounds amazing. Well, Cian, I think we've got about two to three hours of content into somehow just over 50 minutes. So we did well, we did well, Cian.
[00:52:05] Cian Clarke: Thanks so much for having me, Simon.
[00:52:06] Simon Maple: Oh, it's been an absolute pleasure. Cian, I really enjoyed the conversation and thank you, thank you very much for bringing, you know, all of your and Nearform’s expertise and experience to share with our listeners.
[00:52:16] Simon Maple: So appreciate it. Thank you very much.
[00:52:17] Cian Clarke: Absolutely. I'm sure we'll chat again next time I'm over in London, which I have no doubt will be sometime in January.
[00:52:22] Simon Maple: Oh, wonderful. We'll see you there. And yeah, I hope everyone enjoyed that episode. Tell us what you thought and speak to you soon and see you in the next, well tune into the next episode rather.
[00:52:33] Simon Maple: Thank you very much. Bye for now.
In this episode, host Simon Maple chats with Cian Clarke from NearForm about BMAD—an open-source, spec-driven framework for AI-native software development. Discover how BMAD transforms AI from demo tools to practical solutions by centering development on rich specifications and structured context, leading to more consistent outputs and reduced ambiguity. Learn practical tips for adopting a spec-first approach to ship software that reflects intentional design.
AI-native development grows up in this episode as host Simon Maple sits down with NearForm’s Head of AI, Cian Clarke, to unpack BMAD—Build More Architect Dream—an open-source, spec-driven workflow named after its creator, Brian Madson. The conversation moves from why “vibe coding” with generative models needs guardrails, to how spec-first practices and better “context engineering” can make AI actually ship software, not just demos. Cian walks through NearForm’s adoption journey, what BMAD v6 changes, and practical advice for developers picking the right tool for the job.
BMAD is an open-source framework for spec-driven software delivery with AI. It’s model-vendor agnostic and IDE-agnostic, which appealed to NearForm given its open-source pedigree (Fastify, Node.js core). Rather than relying on one-shot prompts or loose interactive sessions, BMAD centers the process around rich specifications—artifacts that represent the “what” and the “why” before AI generates the “how.” In Cian’s words, it’s really intelligent context engineering for software projects.
The v6 release is a notable evolution: teams can now select the scale of the project they’re attempting (e.g., prototype vs MVP vs more ambitious builds), and a new module system lets you “bring your own roles” to shape how the model behaves (think architect, tech lead, or QA sensibilities encoded as roles). These capabilities transform BMAD from a clever demo tool into a more comprehensive, team-friendly workflow so you can steer the AI with domain language, constraints, and team norms.
At its core, BMAD treats specs as living, executable context. Instead of letting the model invent missing details, teams make those trade-offs explicit up front. The payoff is more consistent outputs, fewer rewrites, and a codebase that reflects intentional design, not the stochastic creativity of a coding model left to fill in the blanks.
Vibe coding is great for quick explorations, but ambiguity in requirements becomes expensive—and fast—when models are generating whole systems. Humans will pause to ask clarifying questions when requirements are fuzzy; models can and will “make something up.” That manifests as rework, wasted tokens, and code smell as ad hoc choices ossify in the repo. What used to be a nuisance in traditional development becomes magnified with AI, because the model acts decisively on incomplete direction.
BMAD (and spec-driven practices generally) attack this ambiguity tax by grounding the model in clear constraints, behaviors, and desired outcomes before code generation begins. It separates the “what” (user outcomes, domain rules, constraints, business levers) from the “how” (tech choices, libraries, patterns), and encodes both for the model in structured docs. This is akin to BDD-era rigor but adapted for generative coding models: not a monolithic prompt, but a set of artifacts the system continuously references.
The result isn’t just better code—it’s reduced uncertainty and better alignment with stakeholders. When the system does need to make novel decisions, those are guided by explicit roles and constraints rather than the model’s guesswork. That keeps the repo cleaner and the iteration loop tighter.
NearForm’s journey to BMAD flowed naturally from its consulting practice. Their Ignite discovery sessions align stakeholders on goals, constraints, and measures of success. NearForm started capturing these conversations with AI note-takers, then distilled them into playback decks, requirement docs, and backlogs. That artifact pipeline became the perfect feedstock for BMAD, turning discussion into the structured context a model can execute against.
This mirrors well-known product practices like Amazon’s PR/FAQ and “squad mobbing”—writing the future press release and FAQ to surface requirements early. With BMAD, those artifacts don’t just live in Notion—they guide the generation of the software itself. The documentation evolves into a control surface for the model, reducing randomness and allowing teams to “tune” behavior via updated specs and role modules.
NearForm also cross-pollinates with other AI-native dev tools. Kiro often sits alongside BMAD as a strong contender for AI-accelerated delivery, and the team explores tools like Tessl for discovery-in-context and agent enablement. The key is picking tools that amplify a spec-driven workflow rather than encouraging ad hoc coding.
BMAD has become the tool of choice at NearForm for AI-native development of greenfield MVPs and shippable systems. When your goal is a real product—something you’ll keep, maintain, and evolve—BMAD’s upfront structure pays dividends. It’s also the most capable approach Cian’s team has found for brownfield work, though legacy code always introduces friction. Even then, having roles and modules that express your architectural intent can guide the model through complex repos more credibly than freeform prompting.
Historically, BMAD’s process felt heavy for rapid prototyping or throwaway experiments. V6 addresses this with the ability to select a smaller project scale and simplified workflows, but NearForm still reaches for tools like Bolt.new when speed-to-demo matters more than longevity. The pragmatic guidance is simple: if it’s a spike you’ll throw away, go lightweight; if it’s an MVP or production-bound artifact, invest in specs and BMAD.
Practically, adopting BMAD looks like this: run a discovery session (Ignite-style) to align stakeholders and document constraints, distill that into specs and a prioritized backlog, choose your project scale in BMAD, define role modules that reflect your team’s architectural and quality preferences, and let the tool orchestrate model output. Iterate by refining the spec and roles, not by patching random code the model produced.
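BMAD defines its roles and workflows in its own module files, but the underlying idea is worth making concrete: intent expressed as structured, versionable data rather than freeform prompts. The sketch below is purely illustrative (the `RoleModule` shape and its field names are invented for this example, not BMAD’s actual format); it shows how a role with explicit responsibilities and constraints can be rendered into context a coding model follows, so iteration happens on the spec rather than on generated code:

```python
from dataclasses import dataclass, field

@dataclass
class RoleModule:
    """Hypothetical role module: explicit intent a model can execute against."""
    name: str
    responsibilities: list[str]
    constraints: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Render the role as structured context for a coding model,
        # replacing the model's guesswork with stated intent.
        lines = [f"Role: {self.name}", "Responsibilities:"]
        lines += [f"- {r}" for r in self.responsibilities]
        if self.constraints:
            lines.append("Constraints:")
            lines += [f"- {c}" for c in self.constraints]
        return "\n".join(lines)

# An example role reflecting a team's architectural preferences.
architect = RoleModule(
    name="Architect",
    responsibilities=["Own service boundaries", "Review data contracts"],
    constraints=["Prefer boring technology",
                 "No new runtime dependencies without sign-off"],
)
print(architect.to_prompt())
```

Because the role lives in code (or a spec file) rather than in an ad hoc chat, refining behavior means editing one declaration and regenerating, which is the iteration loop the paragraph above describes.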
Cian frames all of this through developer empathy: what parts of the job did we hate when we were on the keyboard? Think CloudFormation stacks rolling back repeatedly, flaky UI tests exploding because a button shifted ten pixels, or documentation rotting as code shifts. AI should target those friction points first—automating drudge work and eliminating toil—so humans focus on domain logic, system design, and quality.
Spec-driven workflows like BMAD align well with this ethos. They reduce the need for developers to reverse-engineer intent from autogenerated code and instead place intent at the center. Even tasks like documentation become byproducts of the same artifacts steering development. And because BMAD is open source, model-agnostic, and IDE-agnostic, teams keep control over their stack, vendor choices, and code ownership—key concerns for developer trust.
The broader NearForm perspective is to care not just about what you build with AI, but how you build it. Tooling choices (BMAD, Kiro, Bolt.new) should be made in service of predictable delivery, developer happiness, and sustainable codebases. That’s how AI moves from novelty to a dependable part of the software supply chain.
Developers building AI-native applications can apply BMAD today to turn discovery into executable context, reduce ambiguity, and ship software that reflects intent—faster and with fewer surprises.