Podcast

AI's Disruptive Force: Insights from Alex Komoroske

With

Alex Komoroske

9 Apr 2025

Episode Description

In this episode, Alex Komoroske shares his expertise on AI's transformative impact on technology and creativity. As a seasoned web platform expert, Alex provides a comprehensive look at how AI is reshaping the tech landscape. From enhancing development efficiency to fostering creativity and personalizing user experiences, explore AI's disruptive potential and why it matters for developers.

Overview

Introduction

In this episode of AI Native Dev, Alex Komoroske explores the profound influence of AI on development and creativity. As a seasoned expert in web platform development, he offers valuable insights into how AI is redefining technology and fostering innovation. His discussion centers on AI's role as a disruptive force in the tech industry and its potential to enhance human creativity by streamlining mundane tasks.

The Disruptive Nature of AI

Alex argues that AI is not just a temporary trend but a disruptive innovation requiring a paradigm shift in perception and application. "AI is not some random technology; it's a fundamental disruptive innovation," he emphasizes. This distinction is vital for understanding AI's true impact on technology.

AI's Role in Development Spectrum

The conversation delves into AI's capacity to assist developers across various domains, from micro apps to large enterprise software. Alex highlights how AI, particularly Large Language Models (LLMs), can augment human capabilities, enhancing development efficiency and scope. AI's ability to interpret and generate code snippets enables developers to tackle complex challenges more effectively.

AI as a Tool for Creativity

Alex expresses enthusiasm for AI's ability to boost human creativity by automating routine tasks. This automation allows developers to concentrate on more innovative work, fostering a creative and efficient development environment. AI infrastructure, by streamlining processes, empowers developers to focus on high-value tasks.

Decentralization and Centralization Dynamics

The discussion touches on the balance between decentralization and centralization in technology. Alex likens this dynamic to a "tick tock" effect, where AI might shift these dynamics, enabling more personalized and custom user experiences. This transformation could lead to more dynamic and adaptable ecosystems.

Challenges and Opportunities in AI Ecosystems

Alex explores AI's potential to offer personalized experiences without relying on centralized corporate changes. This flexibility could result in ecosystems similar to browser extensions, where users can customize their experiences. Such adaptability is crucial for fostering innovation and user satisfaction.

Testing and AI's Role in Development Efficiency

Alex addresses the challenges of writing tests in development, noting how AI could mitigate these burdens by integrating testing more seamlessly into the development process. This integration would reduce development bottlenecks and improve overall efficiency, allowing developers to focus on innovation.

Conclusion

In summary, Alex Komoroske's insights underscore AI's transformative potential in reshaping development practices and enhancing creativity. Key takeaways include AI's role as a disruptive innovation with vast potential, its ability to broaden the development spectrum, opportunities for personalization, and its importance in streamlining processes. As AI evolves, its impact on technology and creativity will likely expand, offering new opportunities for developers to innovate and thrive.

Chapters

0:00 - Episode highlight: AI's disruptive potential

1:00 - Intro to Alex Komoroske and his expertise

5:00 - Discussion on AI as a disruptive innovation

10:00 - AI's role in enhancing development efficiency

15:00 - Creativity and automation in AI

20:00 - Decentralization vs. centralization dynamics

25:00 - Personalization opportunities in AI ecosystems

28:00 - Challenges in writing tests and AI's role

Full Script

Alex Komoroske: [00:00:00] LLMs are really good at writing crappy software on demand for basically free. So like it's not good software often unless you have the right structures and approaches and the right kind of scaffolding to make sure that it's doing the thing that you wanted, like what you all are building and that these radically change how we think of where you would apply software and how you would apply it.

Simon Maple: You are listening to the AI Native Dev brought to you by Tessl.

Hello, Simon Maple here and welcome to another episode of the AI Native Dev. We have a special one for you this time around as we reshare an AI Native Dev Con session, the one where Dion has a chat with Alex Komoroske about the AI developer ecosystem. Now, one of the reasons why we want to share this session is because we actually have a new AI Native Dev Con event fast approaching on May 13th, 2025.

It's [00:01:00] our spring edition, and we'll also be having a full edition later, in November 2025. At the time of recording, the CFP, the call for papers, and the registration are both open. So head to ainativedev.io/events where you can submit your proposals to speak at the event, as well as, of course, snag your ticket and your spot for the event.

It's free to join us. So with that, be sure to subscribe and like the video if you're watching it on YouTube. And I'll hand over to Dion and Alex.

Dion Almaer: Hey, how's it going Alex?

Alex Komoroske: Good, how are you? Thanks for having me.

Dion Almaer: Yeah, pretty good. Thanks so much for, for joining us. Uh, you know, a little early your time. Whereabouts are you?

Alex Komoroske: I am in Berkeley.

Dion Almaer: Oh, very nice. That's good. That's good. So yeah how did Bits and Bobs come about? Actually, was I right on that?

Alex Komoroske: Yeah, it's so I have this, the compendium, which is that sort of custom web app I built over the last number of years to organize my thoughts.

And if you look at the compendium live, you'll think that it's dead. I haven't added anything to it for [00:02:00] years. I actually have it open every day, and I add hundreds of notes to it every day in my private working notes area. I use embeddings to help me find commonalities across different ideas.
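The embedding workflow Alex describes can be sketched as cosine similarity over note vectors. This is a minimal illustration, not his actual tooling; the note texts and the tiny three-dimensional vectors below are made up, and a real system would get its embeddings from an embedding model rather than by hand:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, notes):
    """Rank (text, embedding) note pairs by similarity to a query embedding."""
    return sorted(notes, key=lambda n: cosine_similarity(query_vec, n[1]), reverse=True)

# Toy notes with hand-written 3-d "embeddings" (purely illustrative).
notes = [
    ("centralization dynamics", [0.9, 0.1, 0.0]),
    ("sourdough recipe", [0.0, 0.2, 0.9]),
    ("platform strategy", [0.8, 0.3, 0.1]),
]
ranked = most_similar([0.85, 0.2, 0.05], notes)
```

Ranking notes this way is what surfaces the "commonalities across different ideas": conceptually related notes cluster near each other in embedding space even when they share no exact words.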

And then each week I go and spend a few hours on the weekends when the kids are napping, and I distill out the notes from that week that resonate with me, and I develop them a little bit. And it started off, believe it or not, as a practice. A very early version of it was when I used to report to Clay Bavor, who recently founded Sierra with Bret Taylor. I would send an email to him on Monday of, here's the people I'm meeting with, here's the things that I'm doing, and here's a few takeaways, like interesting ideas I think you might find relevant. And then I continued that practice at Stripe.

And then that section got deeper and deeper and longer and longer, and I started sharing it with more and more people in my internal little strategy study groups, and then on internal blogs, and then I started publishing externally, because I would do all that reflection on my own personal time.

And yeah. Now it's a document that has a stupid number of pages in it. And it's [00:03:00] really slow. One of the things that happens is, when people are searching in it, it takes so long to load that Google Docs will think that they're typing into the document, and so it suggests a random edit to the document when they're actually just searching for something in it.

So that happens like once a day. I think I'm probably gonna split it. It's, I think, 700 pages or something, and again, I add like 20 pages or so a week, which is just silly.

Dion Almaer: Yeah, I know. It's probably one of their stress tests. So we're gonna get into talking about the future, but curious actually to start with the past a little bit.

Like with this round of AI, and gen AI, when did this first hit home for you? Were you super skeptical at first, and was there a bit of an aha moment that got you thinking, maybe there's something real here?

Alex Komoroske: Yeah, I've been following, obviously, all the papers and interesting developments from Google Research and others for many years, watching it slowly plod along, and some of the silly things that people were doing with the earlier versions of chatbots.

The moment it really hit home for me, though: I have a few groups of interesting thinkers I [00:04:00] facilitate, people I just like talking to and having collaborative debate with. And once a year or so, we would get an Airbnb in the woods, and we'd just have a few days, like 12 people, just for an interesting long-term discussion about what's gonna happen in the next decade, about crypto, and not just for crypto, but for all of society.

And then the next year we did generative AI. And that was, I think, a few weeks before ChatGPT came out. And we had a number of folks, good friends, people who you know as well, in the session. It's just three days of thinking and pulling on threads. And afterwards I would sit down and just write 60 pages of, here's what I took away from that discussion.

And I remember writing it down and being like, holy shit. This is gonna change everything, and it's on the cusp; so many things changed that people haven't even realized yet. And then a few weeks later, ChatGPT comes out, and it was like a starting bell for everybody.

And I knew at that point that like this was gonna be like, LLMs are not some [00:05:00] flash in the pan. They are not some random technology. Like they are a fundamental disruptive innovation. And I think you need to act like it. I think a lot of people are acting like AI as a sustaining innovation, and I don't think that's true at all.

I think it radically changes a number of things that we haven't even absorbed. We're like Wile E. Coyote out over the cliff, as an industry. We're like, oh, we're just gonna slap some AI on this. That's not how that works at all. It actually fundamentally changes the way that we write software, the way that we can experience things, and it changes the cost structures of everything.

It's a really interesting and fascinating thing, but it was at, yeah, it was that period right before ChatGPT came out that I was like, oh, wow.

Dion Almaer: Got it. That's awesome. So this event is called AI Native Dev. And it can be a little bit of a double entendre; it can mean different things. So I'm curious what that means for you.

Alex Komoroske: I think people are currently just jamming AI onto the side of the way that they built software before and the way that they would approach problems. And I think when you embrace LLMs, you write [00:06:00] software differently. You think of it almost like, how do you pull the software out of the mind of this LLM? And you do it in a way that's not like traditional collaborating with another person.

You do it a lot more like what Tessl is doing. I think of specs and designing a thing. Like, I had a silly little example. There's this game called Skyjo. It's a fun little card game, and we were playing it and I got really into it. I realized, oh, if you could count the cards going through, you could strategically be making better calls.

And so I wanted to make a little web app so you could keep track of the cards that had been played out of the deck, and it would tell you what the probability is of different cards coming up, or whatever. This is the kind of thing that would've taken me a few hours on a weekend: sit down, create a new project from scratch, and slowly add in stuff.

And what I did instead was I built it in 10 minutes. I just used Anthropic Artifacts, and I did it exactly the way I would've done it. I said, first, write a little function that does this. Okay, now put that function in this. Now add [00:07:00] TypeScript typing that allows this. I just told it what I would've done for each of the individual steps,

and in 10 minutes you have this really nice, nicely styled thing that has undo and save and all the little features. And I built it not by writing any code, but by just coaxing it, in the right order, with the right steps and things I wanted to do. And that felt like a radically different way of building software to me than sitting down and writing the silly little home project that would've taken me a few hours.
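The card counter Alex describes can be sketched as a tally over the remaining deck. This is an illustrative sketch, not his app; the deck composition below assumes the commonly cited Skyjo distribution (five -2s, ten -1s, fifteen 0s, ten each of 1 through 12) and is easy to adjust if that assumption is off:

```python
from collections import Counter

# Assumed Skyjo deck composition (150 cards total); adjust if your deck differs.
DECK = Counter({-2: 5, -1: 10, 0: 15, **{v: 10 for v in range(1, 13)}})

class CardTracker:
    """Track revealed cards and report draw probabilities for the rest."""

    def __init__(self, deck=DECK):
        self.remaining = Counter(deck)  # copy, so the template deck is untouched

    def see(self, value):
        """Record that a card of `value` has been revealed."""
        if self.remaining[value] <= 0:
            raise ValueError(f"no {value} cards left to reveal")
        self.remaining[value] -= 1

    def probability(self, value):
        """Probability that the next unseen card has `value`."""
        total = sum(self.remaining.values())
        return self.remaining[value] / total if total else 0.0

tracker = CardTracker()
tracker.see(12)
tracker.see(12)  # two 12s have been played, so 8 of the original 10 remain
```

The web-app wrapper Alex coaxed out of the LLM is essentially a UI over this state: each revealed card calls `see`, and the probability readout recomputes from `remaining`.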

Dion Almaer: Yeah. Do you think that kind of app, this notion of, I think some people say micro apps and the like, is gonna change things for a bunch of users and non-developers, maybe even getting into this world where they can scratch their own itches? Where at the moment, maybe they just think, I can do spreadsheets, so I try and use a spreadsheet to do everything that I can. They have these different tools, but they don't feel like they can create custom software.

Alex Komoroske: I think it will. But for a couple of reasons.

So, a friend of mine is not particularly technical, and they knew I was super into playing around with AI, and they [00:08:00] said, I wanna see what it can do. I want to have it program. And so they went to ChatGPT and they were saying a thing they wanted to do, and it would give them HTML, and they would copy and paste it into a file and then load it in a separate tab.

And that step mystified him: I don't understand what's happening. And the feedback loop just took enough time to copy-paste, and then sometimes you'd have to surgically put the code back in, 'cause it would only update a small portion of it. And then we switched to Anthropic.

It has Artifacts, and that was basically the same loop, just in a live preview. And that radically changed it. Instead of it feeling like me handholding him, he just sat there doing little things and experimenting with it. And I left and I came back 20 minutes later, and he'd built a relatively complex little toy app, and he was like, I guess I'm a programmer now.

I was like, yeah, sure. But that immediacy, I think, felt really nice. I think by default, with AI in the current distribution physics, we'll see a proliferation of both hyper-centralized hyper-aggregators and a profusion of lots of little micro apps that are like [00:09:00] little toys that don't really do that much on their own.

I think the real trick will be, how can you figure out a system that allows these micro apps to add up to something much larger than the sum of their parts, to interact with other micro apps and other experiences, and have this emergent possibility? That's quite challenging. I think that requires a fundamentally different kind of substrate to be built on.

And that's one of the things that we're working on. So much of the software, so much of the industry, is predicated on the idea that software is expensive to write and cheap to run, and LLMs mess with both of these assumptions. Code that uses LLMs to execute, which is not all code in the future, but a fair bit of it, now has a marginal cost above zero, which means it's expensive in some significant way that we're not used to.

And secondarily, LLMs are really good at writing crappy software on demand for basically free. So it's not good software often unless you have the right structures and approaches and the right kind of scaffolding to make sure that it's doing the thing that you wanted, like what you all are building.

And these radically change how we think of where you would apply software and [00:10:00] how you would apply it. Clay Shirky has this old essay, from I think 2007, called Situated Software, and it's now off the internet, but Gwern has a mirror of it. It's a great essay, and it talks about situated software as software that is situated in a hyper-specific context.

So anytime you've written a spreadsheet formula, it's situated software; it's situated specifically in the context of this particular spreadsheet. And anytime someone looks at someone else's bit of situated software, it's a piece of crap. It barely works, it's insecure, it's ugly. But to the person who built it, it's perfect.

It does exactly and precisely what they need. And sometimes in those contexts, this is what happens: it breaks, and they say, I don't know, just slap some more situated software on the side. And especially when LLMs can help you write the situated software, you can get software that's just right, it's just the right size for you, as opposed to this one-size-fits-none software that we have today in our modern era.

Dion Almaer: Yeah, that makes sense. Some people I see are snarky about it: you can just do these little toy things, right? You stay in your zone, stay in the micro apps era; I'm an amazing expert developer and you're [00:11:00] never gonna be able to touch me in the domain that I understand, in my enterprise software and the like. How do you think, over time, LLMs are actually gonna be able to help in very broad ways across the spectrum of development that's out there? Or do you think they never will?

Alex Komoroske: Oh no, they will for sure, and I think folks like Tessl and others are really pushing the limits of what you can get. I think the big story of the last year, by the way, people talk about the quality improvements.

To me, it's the context length improvements that just radically change the kinds of questions you can ask. If you wanted to ask of a large book, what are the major themes of this book? You can't answer that with RAG. It's not possible, because it's not like there's a sentence saying, this is the theme, sitting in an embedding.

You have to have the broad context. And the more you have the broad context, the more you can see these subtle patterns, keep track of them, and understand where things are going. And so I feel like we're just at the beginning stages of figuring out how to get LLMs to build large bits of software.

I also think one of the things that's exciting about Tessl's approach [00:12:00] is that one of LLMs' superpowers is translation. You can translate English to code. And so the spec is not just some little documentation; it actually is almost the primary semantic thing.

And producing the actual code is, I think, like compiling the spec into code. And it's unlike a normal compiler, where everyone who's learned C at a certain point says, the compiler's broken, and I can guarantee you the compiler is not broken in this case.

So you learn the compiler is almost always trustworthy. An LLM, though, can get confused, and the thing it gets confused by most is when there's not high locality in the code: when it's in multiple files and it has to coordinate based on a specific phrase or word that has to be consistent in multiple places.

LLMs tend to kinda get a little bit confused by that kind of situation, but there's ways of adding scaffolding and idioms to make it so that they're more likely to stay to a pretty good answer.

Dion Almaer: Yeah, I think one of my realizations around this was feeling like I was using English to poke at [00:13:00] things just like you were talking about but then all of that information was lost.

It was like in the chat session and it was gone and it was like, but that was like my actual record, like Yeah.

Alex Komoroske: It was recording the steps of things to get to the, that result. Yeah.

Dion Almaer: So it felt like I'm compiling this thing and then just taking my compiled output with me. Wait a minute, isn't the other thing the important thing to store?

So what if that could be the canonical piece instead? And so that's why specs made a lot of sense to me. As you've poked around with all these things, what have you found, you've mentioned some of these, the LLMs to be particularly good at, or bad at, when it comes to coding and development in general?

Alex Komoroske: One way to think about this: in Jurassic Park, they have gaps in the dinosaur DNA and they fill them in with frog DNA. LLMs can do that too, where if you don't bring the specific examples, it just assumes, based on the most generic background information, how it fits in.

So they do really well if you're writing it in the same kind of React style [00:14:00] that everybody else writes, because they've got tons of frog DNA to bring into your particular situation to get really good results. If you're writing an algorithm that's represented nicely, cleanly separated, and exists in lots of tutorials and Wikipedia articles or whatever, they're also pretty good at reproducing that.

When you have a relatively large architecture that is slightly atypical, or has some weird stylistic and calling conventions in the way it's been designed and architected, they start getting a little bit funky there; they can lose the plot a bit.

Remember that LLMs can't count. And it's a miracle in some ways that they work reasonably, that they can handle some of these cases. For example, with really deeply nested JSON, they lose track of, wait, am I in an object context or an array context? And they'll mis-nest some of the things, partially because it's just really hard to keep track of the counting.

One way of looking at this is that they do a really good job on shapes they've seen: smaller chunks of JSON, shaped roughly like the common cases, show up in way more examples, because presumably the JSON you'd see on the [00:15:00] web has a logarithmic kind of fall-off, where the larger sizes get increasingly less common. So it doesn't have as much training data to draw on, and it can't fundamentally count.

And so it just kinda loses track. Remember, it's all vibe-based programming. It's just that its vibes are extremely dialed in.
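The bookkeeping Alex says LLMs fumble in deeply nested JSON (am I in an object context or an array context?) is exactly what a tiny deterministic checker can do for free, which is the spirit of the scaffolding he mentions. A hypothetical sketch: walk a JSON fragment and keep an explicit stack of open contexts:

```python
def check_nesting(fragment):
    """Walk a JSON-ish fragment and return the stack of still-open contexts.

    This is the counting an LLM has to do implicitly (and often fumbles)
    when emitting deeply nested JSON; here it is explicit and exact.
    """
    stack = []
    pairs = {"}": "{", "]": "["}
    in_string = False
    prev = ""
    for ch in fragment:
        if ch == '"' and prev != "\\":
            in_string = not in_string  # ignore brackets inside string literals
        elif not in_string:
            if ch in "{[":
                stack.append(ch)
            elif ch in "}]":
                if not stack or stack[-1] != pairs[ch]:
                    raise ValueError(f"mismatched {ch!r}")
                stack.pop()
        prev = ch
    return stack

# A fragment cut off mid-document: three objects and one array still open.
open_contexts = check_nesting('{"a": [{"b": 1}, {"c": {')
```

Running a generated fragment through a checker like this, and feeding any mismatch back to the model, is one cheap way to catch mis-nesting that pure "vibes" would miss.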

Dion Almaer: Yeah. And it's fascinating, because, like you mentioned, there's a lot of React code on the internet that has ended up in these models and the like. And then I start to think about the frog DNA: do we have kind of a meh frog, because most of the DNA that's gotten in there with React code is from the average React developer, not the React core developer that's out there? And so is there that magical piece, like the shark DNA, that hasn't evolved that much, right?

It's just gotten really good and honed in. So you really want something that's good enough, that's got enough content out there, but [00:16:00] is also really focused. I'm really curious about the notion of kingmaking as well. If we created a brand new language today, do we have a chance to make it?

And how do you feel, how do you think about the role of data and quality data and golden path data, all of those things with LLMs?

Alex Komoroske: I love the shark DNA. I'm just writing that down. Yeah. LLMs are really good at imitation. They're so insanely good at imitation that it sometimes looks like innovation, but they're really, fundamentally, conceptually interpolating across a very wide swath of human experience and examples.

And it does feel like it pulls us towards the center, the average mushy center. Its default writing style is corporate speak, because that's the mushy center at the core: the random, bland, heat-death average. And we get the same thing in programming, where we're just kinda, I don't know, I'm just gonna use React.

I'll just do it whatever, the most basic way that you possibly can, basic in the pejorative sense. And one thing that's [00:17:00] interesting and challenging: one of the ways it knows how to weight what things are good is that things that have worked for others in the past are more likely to be replicated, and other people also talk about them and write about them.

And so LLMs aren't randomly sampling all text that could exist. They're randomly sampling the text that humans have found useful enough to propagate, right? And you have this natural thing where humans are, to some degree, selecting the thing it's reading off of. But over time, as more and more of that selection is done by this Bayesian kind of averaging process, you lose some of that signal, and it does feel like it will pull us more towards a centroid.

It also does feel to me like it'll be much harder to escape local maxima, because of the defaults. If I don't have a strong opinion about how to write this bit of code, and I say, do a thing that has this behavior, what's it gonna reach for? It's gonna reach for the most obvious, basic answer and put it in there.

And I don't have a strong opinion, so I'm not gonna go, no, no, no, do it in this other style that I think is interesting. And so that's fine. And so now you get another piece that is put onto the [00:18:00] whole collection.

Dion Almaer: Yeah, that's funny, now I'm thinking about how to nudge selection to be like this software passes tests a lot versus what are the other parts of selection that could help nudge things for us.

Dion Almaer: So I've noticed in the past you've mentioned that spending time to define requirements saves extraordinary amounts of time. And so I'm just curious, when we talk about specs, it means lots of different things to different people. Like, what do you mean by requirements?

Alex Komoroske: I think it's partially that this was my direct experience in the last few weeks, and I realized it's a pattern I've seen many times: oh, we understand how to build this. I've got the idea in my head. We will simply execute this. It turns out, explaining the requirements, you have this hyper object in your head of, this should work, and all the pieces fit like this. And explaining it to other people is like a serialization process; it takes so long, and it's a combinatorial kind of thing to really serialize it perfectly.

And the way to communicate it is to then factor that serialization into higher-level requirements, which is a process that takes a [00:19:00] ton of time. And so you're always like, oh, we're fine, we're just gonna wing it. I'm just gonna give 'em enough guidance or whatever. You think, we'll go fast by not doing the requirements.

Or by not factoring out the work of the requirements. And then what you end up with is you realize later that people didn't understand the fundamental thing you were trying to do, and they went off into these other weird corners, and you have to say, no, actually, here's the whole thing. You can't do it that way. You gotta do it this way.

And they go, okay, cool, I have to throw out this entire thing I just did. And so to them it feels like total thrash. And the mental model in my head is like a dynamic programming problem without memoization, done in exactly the wrong order. So each calculation you're doing 20 times, 30 times, or whatever.

And that's what it feels like to skip the requirements. To me, the requirements process is taking the serialization of the hyper object and distilling it into a more abstract and condensed version that can be efficiently communicated to somebody else.
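Alex's analogy, a dynamic programming problem without memoization evaluated in the wrong order, is easy to make concrete. A small illustrative sketch, counting how many times each approach touches a subproblem:

```python
from functools import lru_cache

calls = {"plain": 0, "memo": 0}

def fib_plain(n):
    """Naive recursion: the same subproblems are recomputed over and over."""
    calls["plain"] += 1
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized: each subproblem is computed exactly once, then reused."""
    calls["memo"] += 1
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

fib_plain(20)
fib_memo(20)
# calls["plain"] is in the tens of thousands; calls["memo"] is just 21.
```

Serializing the hyper object up front plays the role of the memo table: each piece of the idea gets explained once and reused, instead of being re-derived 20 or 30 times for each listener.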

Dion Almaer: Yeah, that makes sense. Some developers initially got excited about having these new tools.

Some developers got a little bit nervous at first: is AI going to take my job? And that [00:20:00] whole meme. When you look ahead five, 10 years, what aspects of software development do you think are gonna remain fundamentally human-driven? And what parts do you think we're gonna be somewhat happy to hand over to be AI-driven?

Alex Komoroske: Yeah, I think that I'm personally excited about AI as a tool for human creativity, because it takes the bland drudgery, the parts that aren't exciting or interesting or surprising to you in any way, and lets you execute on those way faster. And one of the powers of infrastructure in general, one of the reasons infrastructure works and creates value in the world, is that instead of every single individual customer having to build their own lower layers, you can build it once and share it across many. You write it once, you share it with many.

So everybody's getting that value, but the cost is a single instance of it. You're factoring out the most boring parts of lots of people's jobs. LLMs also factor out the most boring parts, the parts that are just [00:21:00] mundane drudgery. And I think that if you use them properly and see them as a tool to extend your creativity, they allow you to do other interesting new things. A number of people have told me they've started writing poetry, and they're like, yeah, I used to like it in high school, I haven't done it in a while, and now I've got a brainstorming writing partner that helps me come up with a better rhyme, or eight different examples of this, or critiques the thing. Whereas before, if I showed the poetry even to my spouse, I didn't wanna look like a shitty writer. And so I think it's those kinds of things that allow people to stretch further. And I think that's an active choice.

It is also just as easy to take those tools and become increasingly passive, increasingly like, cool, now I don't have to think very hard. No, I think you should use these tools to help you think harder.

Dion Almaer: Yeah, that makes sense. So we both spent a lot of our careers working on the web platform as we mentioned at the beginning.

And I think we both had a love for the kind of openness of that era. And then we got to see mobile, which, [00:22:00] as Guypo mentioned in the keynote, was a slightly different shape of openness, which had different effects. As someone who really studies ecosystem dynamics and the like:

How do you think AI is gonna potentially change the landscape when it comes to what can happen to the existing ecosystems? Like, how's this gonna affect the web and mobile, and what opportunities might it give us in the future to have change?

Alex Komoroske: Yeah, I think that typically decentralization and centralization is like a tick-tock.

At the beginning stage of a new disruptive paradigm, it's open and decentralized, and then later it centralizes as the best patterns are found, and the compounding advantage to the ones who are best at it gets stronger and stronger, and you get to a really boring part.

Everything interesting has been done; there's no space for innovation. A thing is allowed to exist only if one of these three big players thinks it's allowed to exist. That's insane to me. And so one of my hopes for AI as a disruptive innovation is that it resets and gets [00:23:00] you into this open-ended, decentralized world to start again.

In some ways there was a real danger, like a year and a half ago; I thought that AI would be an inherently centralizing technology. It's so capital intensive that only a very small number of companies can possibly invest the capital, and then they become the only ones that can train and host these large models. And that was a very real possibility. Looking at it a year and a half ago, there was a really real possibility that OpenAI would just be the hyper-aggregator, the apex aggregator to rule them all in this new world, and we'd jump straight from centralized to hyper-centralized.

And that would be it. And I think, very encouragingly for society, one, Anthropic built a better model. Sonnet 3.5 is my daily driver; I use it every day, all the time. It's arguably a better model on a number of fronts, which means there are at least multiple horses in the race. Gemini has been making a lot of really interesting progress, and then Zuckerberg comes along and Meta just goes, hey, here's an open weights model that is actually one of the best in class, [00:24:00] which really scrambles all the dynamics in a really healthy way.

I think what happens now is, instead of the AI model producers having this extreme centralizing force, they look to me a lot more like cell network providers: extremely capital intensive, and also without that much pricing power, because they're basically commodities and you have lots of different options to pick from. Which is, I think, really beneficial for society, because those individual operators don't have that much strategic power.

But all of us benefit from there being nice competition, and it allows an open ecosystem to thrive on top of that. One of the bets I was asserting a year ago is that companies that take AI for granted can do some really interesting things. If you are fighting to build frontier models that compete with the big ones, wow, that is a really tough space to be in.

But if you say, I'm just gonna assume that there's gonna be good competition and there'll be multiple good, cheap options to pick from, and that competition will drive the quality up and the cost down consistently, that's a pretty safe assumption. I don't care who wins, whichever of them it turns out to be.

I just know that there will be [00:25:00] good, high quality, cheap models to use. And that seems like a pretty safe assumption at this point, which allows an open ecosystem on top, which I think is enormously exciting for society.

Dion Almaer: Yeah, that makes sense. Another thing that I love about the web ecosystem is browser extensions, of all things. It felt like you've got this aftermarket kind of approach, where someone can come along and build a thing that changes the thing for me. Do you think AI ecosystems are gonna allow for more personal, custom experiences, where I can experience the StreamYard system that I'm in now with a few different tweaks, without having to persuade that company to make that change for me, which obviously doesn't scale?

Alex Komoroske: I think so, and I think back to Greasemonkey. I think Greasemonkey was one of the golden eras of the browser: these little scripts that you could put in. And one of my good friends used to run userscripts.org, which was the kind of centralized community for that.

And I think Greasemonkey was really powerful. The problem with Greasemonkey ultimately was that it was insecure, and fundamentally so. [00:26:00] Even as a savvy user choosing a set of scripts to put in your browser, there's a curve that looks like this: I'll only install 10 of these, because I have to really trust that they aren't gonna mess with my Gmail or sell my data or whatever.

And I think the trick is, how can you get that kind of thing? I would never tell my dad to install it. My God, can you imagine what terrible, crazy stuff he would've installed running Greasemonkey? So how do you change the security and privacy model to make it so that it's just a purely, yep, keep on going, the sky's the limit, and everybody should be able to tweak?

I think that's possible, but it requires rethinking the security and privacy models. It can't just be duct-taped onto the existing privacy models.

Dion Almaer: Got it. Makes sense. Okay, I noticed there are some questions that have been asked here, so I want to get to them so I don't take all of your time.

Min asked: how does a developer who is new to creating micro apps ensure that the code the LLM created is of good quality and safe? Are there known techniques to use when interacting with an LLM?

Alex Komoroske: This is one of the [00:27:00] things that's why micro apps are useful in this context. With certain code, if it compiles, there's a pretty good chance it at least does something correct.

And this, by the way, is also a nice loop for LLMs: if it doesn't compile, you feed it back to the LLM. Here's the code you gave me, here's the compile error, fix it. And it can do a pretty good job. That's kind of automated smoke testing. The problem is the semantics, getting the semantics of the thing correct.

And one of the nice things about little micro apps, and one of the reasons they're a sweet spot, is that you can just poke at it. You poke the five buttons in the right order: does it do the thing I expected? The combinatorial space of possibilities isn't that large. But you have to interact with it and poke at it and go, yeah, that does the thing I wanted.

Or, no, wait a second, I hit this button and it did a thing that's wrong. Try again, LLM. But the bigger it gets, the more likely that it has gotten something wrong. And that's where testing becomes extraordinarily [00:28:00] important. I think one of the reasons that what you all are doing is really interesting is that if you write the spec, it's also easy to write the tests. And I find that the biggest reason people don't write tests in practice is that it's such a slog, such a pain in the butt, and you're doing it in a way that feels very different from writing the code. It feels like a tax, but it is the thing that allows you to go fast later.

LLMs also, honestly, don't get bored. They're totally willing to write the boring, annoying tests. And maybe a test isn't exactly correct, but it's more likely to catch a thing that is wrong in the future. And maybe you dive in and go, oh, was this test wrong? But that's the thing that gives you the smoke test that tells you, I should dive in and try to debug.

Dion Almaer: Yeah. Okay. Anthony asked, what are the best examples you've seen of stringing together micro apps or their functions? Incumbents today like Zapier for workflows come to mind.

Alex Komoroske: I think we are not even in the first inning of this. So far we're seeing really interesting stuff: GitHub Spark is interesting, obviously Anthropic's Artifacts was interesting, and what Vercel is doing with v0 is super interesting.

But these are all individual micro apps. They aren't even necessarily distributable; [00:29:00] for Anthropic it's almost an afterthought that you can distribute the Artifact, and in others you can't. So I don't think we know what good looks like here yet. We as an ecosystem have discovered that micro apps are a sweet spot for LLMs, but we haven't yet figured out what to do with that.

Okay, in a world of lots of micro apps, how do you stitch them together? I think that's an open question that no one has really gotten a good handle on yet.

Dion Almaer: Yeah. Awesome. Alex, thanks so much for taking the time. I could talk to you for hours about this stuff. It's been really fun.

Alex Komoroske: Thanks for having me.

Dion Almaer: And yeah, find Alex online. I can't wait to hear more about your startup. It's called Common Tools, is that right? Yeah. Super excited about that. And yeah, join the Discord and ask questions there too, and we'll be looking over it. But Alex, thanks so much.

Alex Komoroske: Of course. Great seeing you. Thanks for having me.

Dion Almaer: Cheers.

Simon Maple: A big thank you there to Dion and Alex. And a reminder that the call for papers for the AI Native Dev Con spring edition, [00:30:00] happening on May 13th, is still open at the time of recording, as are registrations. So head to ainativedev.io/events, where you can submit your proposal to speak at the event, as well as snag your tickets for free.

So thanks for listening to this episode of the AI Native Dev. Be sure to subscribe, hit that thumbs up if you're on YouTube, and tune into the next episode. Bye for now.

Thanks for tuning in. Join us next time on the AI Native Dev brought to you by Tessl.

Subscribe to our podcasts here

Welcome to the AI Native Dev Podcast, hosted by Guy Podjarny and Simon Maple. If you're a developer or dev leader, join us as we explore and help shape the future of software development in the AI era.
