Is Code Dead & The $1B Solo Startup Myth - Tom Hulme's conversation with Guy Podjarny
With
Guy Podjarny
15 Apr 2025
Episode Description
At the London GenAI Meetup, Tessl founder Guy Podjarny and GV’s Tom Hulme dive into the future of AI-native development. They explore the shift from code-centric to spec-centric workflows, the rise of LLM-powered tooling, and why most dev tools today lack differentiation, loyalty, and real autonomy. The chat covers the launch of the AI Native Dev Tool Landscape, mapping 170+ tools across the dev lifecycle, and examines what’s missing from the space — from autonomous systems to better design and DevOps solutions. They also discuss the myth of moats in AI and whether a one-person billion-dollar startup is possible. This is a must-watch for anyone building or navigating the next generation of software development.
Overview
Introduction
In this insightful podcast episode, Guy Podjarny, the founder of Tessl, explores the significant paradigm shift from a code-centric to a spec-centric approach in software development. This transition is pivotal to Tessl's mission, as Tom Hulme frames it: "What does it mean when you start to talk about actually moving from a code-centric way of developing to a spec-centric approach? This is probably the most important part of what Tessl is going after in my opinion." Understanding this transformation is crucial for grasping Tessl's future focus and the broader implications for AI-native development.
Main Discussion Topics/Highlights
Cloud-Native Development Practices
Guy Podjarny passionately discusses the advantages of cloud-native development, underscoring the necessity of embracing a diverse array of services. He notes, "The true benefit from cloud-native development, as we call it today, comes from practices and changes that really embrace it — the plethora of services." This reflects the complexity and vastness of the ecosystem, where developers must navigate effectively to harness the full potential of available tools.
Tool Ecosystem Observations
With over 170 tools in the ecosystem, Guy examines how these tools often fall short of delivering on their promises. Tom poses a critical question: "Help us understand how you think about these tools, and how you categorize them." This section also touches on the rise of natural language interfaces, which can significantly enhance tool utility and user experience.
Generative AI and LLMs Reliability
The reliability of generative AI and large language models (LLMs) is a focal point. As Tom Hulme observes, "We ask ourselves, where is generative AI now? Are LLMs reliable enough today? And we've not really found any fully agentic, reliable enough tools." He notes that these models are most effective where the training data has been cleanest, raising important questions about their current dependability in software development.
Future of Software Development
Guy reflects on the future of software development, pondering the necessity of a translation layer between human and machine languages. He muses, "When we think about things from first principles, what is software development going to look like?" The adaptability of LLMs could reduce the need for maintaining decisions, potentially altering the developer's role and how software is created and maintained.
Language and Development
The podcast raises intriguing points about the prevalence of English in development, with Guy noting, "It's interesting to me that English or American English is becoming the language of development." This trend may render some programming languages obsolete as English becomes the dominant interface for building software.
Summary/Conclusion
The episode wraps up with insights into the changing role of developers and the rapid growth potential of companies leveraging these technological advancements. As Tom Hulme puts it, "The dream is they start to embed memory, and that is a moat. It's 'understand me.'" The discussion highlights the transformative influence of AI and cloud-native practices on software development, emphasizing the importance of adaptation and forward-thinking strategies in this ever-evolving field.
Resources
GV Event Blog
A blog written by Tom Hulme about his conversation with Guy Podjarny.
AI Native Dev Tool Landscape
A curated, community-driven map of 170+ AI dev tools across the software lifecycle.
Cursor
A VS Code fork AI IDE focused on fast, AI-assisted coding.
Windsurf
Another AI coding environment similar to Cursor, focused on developer productivity.
Bolt.new
A no-code/low-code platform for building apps fast, targeted at citizen developers.
Base44
Emerging no-code full stack AI development platform.
Vercel
A popular frontend cloud platform, increasingly incorporating AI-driven features.
Chapters
00:00 - Introduction to AI Native Development
01:40 - The Vision Behind Tessl
03:20 - Rethinking Software Workflows with AI
05:00 - Building Trust and Embracing Change
06:15 - Mapping the AI Dev Tool Landscape
08:10 - Challenges in the Current Ecosystem
10:00 - Copilots, Autonomy, and Differentiation
13:00 - From Code-Centric to Spec-Centric Development
19:00 - The Evolving Role of Developers
25:00 - Future Outlook and Closing Thoughts
Full Script
Guy Podjarny: [00:00:00] And so I think you have to imagine what would happen once trust is gained and change is accepted, if it is correct and that is the AI native destination, then you have to think about what's your journey to it? 'cause if you just start there, nobody's gonna use the product. And the product is not yet worthy of the trust.
A big investment in curation of trying to map out a good number, not at all of the AI dev tools out there. We've mapped out, I think, about 170 tools at creation. We already have a bunch of contributions.
Tom Hulme: What does it mean when you start to talk about actually moving from a code-centric way of developing to a spec-centric one? This is probably the most important part of what Tessl is going after, in my opinion.
Simon Maple: You are listening to the AI Native Dev brought to you by Tessl.
Hello and welcome to another episode of the AI Native Dev. Today we really [00:01:00] wanted to share with you a very recent meetup recording from the London GenAI community. The community is run by our friends at Google Ventures (GV), and Tom Hulme had a fireside chat with Guy Podjarny, where they covered a number of topics.
Guy compares the AI native dev movement to cloud native and talks about some of the similarities there that lie within. And also talks a little bit about the landscape of AI native dev tools and the recent landscape.ainativedev.io comparison and categorization that was released not so long ago.
Guy also talks a little bit about what spec-centric means and how developers will be building in a spec-centric way. He also talks about how the LLM magic is wonderful, but how do we harness that? How do we actually benefit from the way in which LLMs can generate code? There's a lot to talk about, and I'll hand over straight to Tom and Guy. Enjoy the [00:02:00] session.
Tom Hulme: We thought we'd kick off and just give you a quick run-through of the evening. So initially, Guy, who we've known for just under a decade since the Snyk days. We'll have a conversation about Guy and team's new company, Tessl.
Many of you will have heard about it, but very few know what they're doing yet, so that's gonna be exciting. But maybe to kick off, we can jump straight into Tessl. We are at an amazing time where every day generative AI is changing, and the way that we are all working is changing. We're gonna focus this initial conversation mainly on software development.
But you've talked about AI native software development as a kind of new paradigm. What does that mean?
Guy Podjarny: I guess first of all, thanks for hosting me over here, and thanks everybody for coming. Hopefully there's a few interesting tidbits. I think AI is a transformative, a disruptive technology.
And I think when we use it today, when we look at the places where we embrace it the [00:03:00] most, oftentimes it's actually in a sustaining way. It's in a place in which we embrace it into our existing workflows. And that makes sense, right? We don't wanna change much, and so we're taking kind of the way we work today and we automate parts of it.
And that's also true in software development. We've basically created many different ways, like different UXs, to create code, and it's not to belittle it. Those are very valuable, and they automate. But I guess our belief, my belief, is that the true opportunity in any transformative technology, and very much in AI, is to rethink the workflows themselves.
And that takes a while. It's not as easy, because you need to change, you need to modify something. We're gonna have Synthesia here later. That's a great example, right? You can use AI to sharpen images in video cameras, and that is very valuable. But the true potential comes from some sort of text-to-video experience that really, like a 100x, a 1000x,
like totally changes the game in terms of what you can do. And we've seen that on the development side with cloud. It's nice to move to something that is elastic, or to use some sort of cloud [00:04:00] storage. The true benefit from cloud-native development, as we call it today, comes from practices, from changes that really embrace it, the sort of plethora of services.
And so I guess when we talk about AI native development, it's a little bit of a mandate to dare to consider different paths, different workflows. And I think we have a thesis, we'll talk about that a little bit later, about how that would look. But I think it's also a community exercise, a movement of trying different things that challenge the way we work, the way the workflow is, to find things that are AI native, that are designed to say: if you thought from first principles, how should I do this given the sort of powerful new beasts that are these LLMs, this is how I would build it.
Tom Hulme: And we've sometimes talked about the idea that you can go straight to the end goal, the level five autonomy, or you can build up over time.
Guy Podjarny: Yeah.
Tom Hulme: You are imagining the end state now, and then thinking through how can we get there?
Guy Podjarny: Yeah. And I think to an extent, there's no end state, like of course these things continue to evolve. But yeah, it's I talk, we're not gonna belabor this too much, but I talk about both the level of trust, how much you trust that to get it right. Being a factor and [00:05:00] the level of change, how much do I need to change how I work?
And so I guess in general, I think startups should anchor in the future. You should think about what would be more needed in five years' time and therefore try to build towards it. And so I think you have to imagine what would happen once trust is gained and change is accepted. If it is correct and that is the AI native destination, then you have to think about
what's your journey to it? 'Cause if you just start there, nobody's gonna use the product, and the product is not yet worthy of the trust.
Tom Hulme: And there's so many tools out there at the moment. We've talked a lot about how it's hard to make sense of the change and the ecosystem. Help us understand how you think about those tools, how you're categorizing them.
Guy Podjarny: Yeah, so as luck would have it, we've actually done a very thorough project to map precisely that. And today we launched the AI Native Dev Tool Landscape. That's a bit of a mouthful.
We need to shorten that out. It is a big investment in curation, of trying to map out a good number, not at all of the AI dev tools out there. We've [00:06:00] mapped out, I think, about 170 tools at creation. We already have a bunch of contributions proposed, just in the few hours that it's been out, that try to bucket all the different AI developer tools. We charted them. We tried to think of many ways in which we can bucket them together to say: what are these tools that help us integrate AI into software development across its different stages? We put it mostly across the development lifecycle, from your kind of product design, requirements, and prototyping phase, into your coding, into your quality assurance and DevOps. And eventually we also included a section that felt inevitable, around the AI infrastructure that you might run on. And we gave a little bit of information about each of those. So we launched that today. We really feel like today you're lost.
If you're a developer and you want to use a tool, you probably know of some five or six different things. And by the way, they all sound the same. We can come back to that a bit. There's no real reasonable way, I'm not even gonna say easy way, to find out what is out there. They change [00:07:00] so fast.
And so finding what is there, like even just getting a taxonomy for it. So this is part of a broader initiative we have in the AI native dev to try and help us wrangle this sort of chaotic ecosystem and capture these tools. We take submissions from the community around different changes, different additions that we need to do.
We ourselves will invest and curate them. We'll add more information, case studies, demos, examples, terminology, to try and pull it together, to try and move us towards the understanding of what AI native development is, what different lenses and different opinions on it are, and wrangle it.
And we'll also integrate over time, news, and others to, again, just stay on top of this super important but so messy, so fast changing, so chaotic domain.
Tom Hulme: I think Guy's point is important. This is very much sort of UGC. We are hoping that everyone will submit to this and it'll start to be a living kind of repository. In fact, you're using GitHub to run it in the back.
Guy Podjarny: Indeed.
So we think in general, AI native development is a movement.
It's not something that is one product. We're trying to help scaffold that [00:08:00] help facilitate it. And so we're doing that across many things. We run a virtual conference on it. We host kind of people on the podcast who share their learnings and this is part of it.
So it's open source. It has a repo where you can post contributions. Someone does need to curate it. Someone needs to say, yeah, this is just marketing, versus we try to make sure that we optimize it for the primary user, which is a developer wanting to understand: how do I use this to develop software with AI?
Tom Hulme: Got it. And so if you step back, there's 170 plus tools on there. It's been updated since we took the screen grab. Yeah. What are some of your observations when you look across the ecosystem?
Guy Podjarny: It's messy. I'd say maybe a bit also on challenges we've had building it. So one, the scope of tools is very ill-defined.
I think we know what an IDE is, we know what a build system is, we know what a Git repository or a Git platform is. In this domain, everybody wants to say they are everything, and none of them actually do it to completion. And so there are a lot of features that come from different [00:09:00] disparate parts of the development process. And it's, I think, very much still in flux what a cohesive value proposition is. The places where I think we know a little bit are coding assistants. I think we know what that is, but even those products are now growing to say: hold on, I also have a chat, and I'm gonna build a bunch of things on it.
Okay. So are you now an agentic system? I don't think we've fully solved it here. We tried it. We're gonna tag a bunch of things. But I think that is very evident that the scope of tools is hard, and I think that's gonna be hard for people when they adopt it because they don't know precisely where it sits.
I think the second thing we've observed is that there's actually very little loyalty. So this is not yet fully in the landscape, but what we see is: two years ago you'd ask people which IDE they use, and you'd get absolutely one answer. One year ago, same, you'd get one answer, right? Whatever that answer is, you'd get one answer.
Today, when you talk to people that embrace these tools, people move around. They'd use Windsurf here and Cursor there. They'd use multiple different tools, trying them out. They definitely try out different observability or SRE parts of it, [00:10:00] because neither of these tools really delivers on the promise fully.
And so you see a lot of tools that are making the big promise and delivering a piece of it. But I think that's another aspect: very low loyalty between them. And that very much relates to low differentiation. You see a lot of these tools, you try to analyze them, try to kinda map them out, and they feel very samey.
Again, not to say that they're not valuable, they're very valuable, but differentiation is rare in any of these tools. So I think those are the primaries. In addition to that, I'd say there's definitely heavy emphasis on copilots and assistants. You actually see very few autonomous claims, and the ones that are autonomous are almost entirely behind a waiting list, a controlled-access type level.
And again, it's probably evident that a lot of the tools are just not fully mature, and it's hard. It's easier to create a promise and to create a demo, but the gap, the chasm, between that and a reliably working product, let alone an autonomous product, is there. So you see many different [00:11:00] copilots.
I think those are the primaries. I think there are some emerging UX patterns as you dig into the product themselves. We're gonna try and tag them. The obvious one being chat, there's newer ones around pointing and commenting threads on it. Visual is getting more and more active on it, especially on the design side.
The ability to interpret an image has boosted. And there's a burgeoning open source ecosystem over here as well, which is nice; in practically every category we had some good open source.
Tom Hulme: And then I guess some of the other trends we've seen, just looking at this, one is natural language as an interface.
Something that we're seeing, partly, is LLMs are good enough at transcription today. If you look at just the code tools, one axis you and I have talked about before is: you've got traditional development on one end, you've got completely agentic dev at the other. You've got something around citizen developers next to that: Lovable, Bolt.new, one of our portfolio companies we're excited about.
Also Vercel, maybe then Cursor and Windsurf. Now if you look at Cursor and Windsurf as a sort of example, where we meet people that are [00:12:00] switching and literally say it's a two-minute switch, because it's VS Code, the UX is the same.
Guy Podjarny: Yep.
Tom Hulme: The sort of foundational model it plugs into is the same. There's no moat here.
Guy Podjarny: Yep.
Tom Hulme: Like how might they differentiate? How do you think about that?
Guy Podjarny: So, absolutely agree. They're all very much the same, and you move between them and the patterns are the same, the natural language or the VS Code fork or whatever those are. I think today a lot of them try to differentiate by a claim that they are
better in the way that they deliver the value. Like how well does it complete things, or how well does it build? It's very hard to back those claims. It's like very vibe-testing, vibe-check type work. And so I think today it's hard. To me, what I see too little of is attempts to really think about that sort of workflow change to destination.
There's so much gold in them hills today, of selling products that work into the existing workflow and automate it, that everybody's chasing the exact same thing. I think the right way [00:13:00] for them to differentiate in the long run is to think more long run.
Now granted, maybe some of them have strategies that are not there, but when you listen to a lot of these founders in podcasts or elsewhere, when they describe it, many of them talk about: hey, we're leaning in, we're delivering the value today, and we're aiming to be fast, to be ahead. You should be fast, you should try to be ahead, but I think that's not a destination.
So I think what will happen is we'll see a lot of them, like the smaller ones, probably just get lost in the shuffle, right? You definitely see cases where there's a hundred different companies doing the same thing. And so for those, it's just hard to even break out of the crowd.
And then for the ones that become big, I know Bolt, Lovable, Base44 now came outta somewhere, Vercel. So all of those tools that have some critical mass to emerge. I do think that they hit a bit of a wall. Like they hit a bit of, okay, now you've built with these products.
You, you got to some sort of level of functionality. And as long as you don't rethink the workflow, you're limited in the amount of value that you provide.
Tom Hulme: Yeah, makes sense. I think one of the things, we all have recency [00:14:00] bias in everything we do, and I think one of the counter-arguments would be: everyone saw OpenAI, believed that the tech would get disrupted relatively quickly. You could argue it was.
Guy Podjarny: Yep.
Tom Hulme: But it quickly became a consumer business because of basically PLG. It, it became ubiquitous and it's still the sort of standout business. So I think if I had to make an argument for this sort of quick ramp, like these businesses that are getting to a hundred million revenue run rate in less than six months, I think their argument would be, it may be sticky, we need to be the brand and get to scale. And then the dream is we might get to enough data that we can maybe fine tune our own model or we can move the workflow forward that you described.
Guy Podjarny: Right, yeah. And I think it's not unfounded, that claim. When you look at, I know, Cursor's level of adoption, clearly they have a large volume of users now, and they probably have some data and they can use it.
I will say that building better models has not proven to [00:15:00] be a very deep moat. And so, this is just an opinion here, but if your workflow is the same, you are easily swapped. 'Cause even people that swore by Cursor, they still try Windsurf and they're fond of it, and they're not that loyal.
And so clearly there's something there that's missing. Same for the models. I think on the consumer side, actually, people are even more loyal in chat and such, just because they build their habits around it.
Tom Hulme: And I think the dream is they start to embed memory and that is a moat. It's understand me.
Guy Podjarny: Yeah. It's a moat to the extent
Tom Hulme: we're not there yet.
Guy Podjarny: Yeah, yeah. Just kind of a challenge of that bit. It's a moat to the extent that it's not exportable. And there's a certain overlap between the type of visibility you want for something to have human oversight.
And making that be something that is your role.
Tom Hulme: It's actually, I'd never thought about that before, but I've always thought chat is completely interchangeable, zero friction to move, because it's just a text box, natural language. But the foundation models get commoditized by distillation.
The interesting thing is actually your personalized version of a foundation model with memory can probably get [00:16:00] distilled as well.
Guy Podjarny: Yeah.
Tom Hulme: There'll be some sort of way to access it. So maybe even the memory and the customization isn't as defensible. How about the gaps in this landscape? As you looked at it, what are the big areas you're surprised no one's building in? There's loads of builders here in the audience. What would you be going after, having looked at this landscape, if you were starting today?
Guy Podjarny: I think, yeah, I don't think there's any stone that is entirely unturned, things like, hey, there's only 20 startups in this space.
There's still room there of it. Yeah. So again, one copilot to, to autonomous, like very lacking in autonomous development or autonomous activity. And I think there's opportunity to find slices that you can perform that are substantial, that are true, that you dig into and that you can provide those autonomously.
I think those are very valuable. We're seeing that a little bit. I think there's definitely more adoption in the, so I would almost bucket it into these sort of four buckets that we put, right. On the product side, I think there is less evolution, or say less maturity, around the design side, around [00:17:00] the requirements side.
There are far fewer tools in that space. I think maybe they're adopted in a little bit lower volumes. The code side is definitely the most busy: many users, but many tools. To a large degree, on the QA side, the quality assurance, you see a lot of enterprise focus.
And I think there's kinda this hope that this reinvigorates testing. I think focusing on specific stacks, focusing on specific niches, is working well there. And I think there's still opportunity over there, and the technology seems to be feasible today. And then lastly, in the sort of DevOps SRE space, I think people are just very afraid to have autonomy there.
So there are a few companies that are building autonomous agents that are in the calendar, airplane, they're traversal but typically there's just a lot of sifting through data type opportunity. And again, I think niching into a specific solution, a specific domain in which it can provide autonomy feels like a real opportunity.
Tom Hulme: I think, yeah. Just building off that, one of the areas we've observed is we ask ourselves: where is generative AI, are [00:18:00] LLMs kind of reliable enough today? And we've not really found any fully agentic, reliable-enough tools. Where they're most reliable is where the training has been cleanest.
So that's obviously coding. Legal is phenomenal for this. That's a space we've invested in a lot for that reason. Customer service is one we are looking at all the time now; whether it be text or voice, it's good enough. So people are starting to ask this question about reliability, because for scale, you have to get there. And it's why coding may actually be representative of what you've gotta see happen to other industries, because it was one of the most reliable, fastest.
Guy Podjarny: Yeah, and I would say, though, in copilot mode. 'Cause autonomous coding right now, that is still failing by the droves, even in the sort of top companies. And so
Tom Hulme: You shared a fun video before this, which maybe we're gonna jump in and show in a second, but just on that, around reliability.
Guy Podjarny: Yeah.
Tom Hulme: And this idea that there's just an explosion of code, [00:19:00] probably bad code. What does it mean when you start to talk about actually moving from a code-centric way of developing to a spec-centric one? This is probably the most important part of what Tessl is going after, in my opinion. What is spec-centric code?
Guy Podjarny: When we think about things from first principles and say, what is software development going to look like? You have to make some bets and think about where it's headed. We realized that one of the constraints that we have today is that we need this translation layer between sort of humans and machines.
And that is satisfied with code, and today software development really revolves around that code. And so you get some requirements, you get some sort of tasks you need to build, you write some code. While you write the code, you make a hundred decisions that never make it outta the code.
Over time you get more requirements, more changes. The code evolves, the requirements get thrown by the wayside. And it's like walking into this room and looking at the curtains and the screens and the choice of desks here and the elevation and the lamp, and you don't know which of these are compliance requests, [00:20:00] accessibility, versus things that have been designed in, and which ones were budget decisions, and which ones are corporate colors.
You just need to guess, right? And similar with the code: you come along, you look at what's there and you say, what am I allowed to change? I don't know; look around, apply common sense. A lot of data has been lost. And over time, as systems grow, they become very fragile in that fashion.
And I think we have an opportunity to change that around. With LLMs, we have the ability, on one hand, to think about something at a higher abstraction level. So we can talk about the systems that we want in natural language and visuals and all of these great tools that LLMs give us today to describe what we want. And secondly, we can choose the line of adaptability that we want, to be able to tap into the magic of LLM decision making. So we can say: hey, it's really important for us that there's a ramp over here, using physical terms, right? And that it's the purple color, because that is the design decision.
But we're fine delegating to the LLMs the placement of the TVs. And by making that a delegated decision, over [00:21:00] time, when TVs change and all of that, LLMs are adaptable. So we no longer need to maintain our decisions as much; the LLMs can make that decision. I think today you can go to a Bolt or any of those tools, which are powerful still today, but generally you chat away.
And you really try to delegate. There's a range in which you delegate a lot of decisions, but you have no way really to define what matters to you. Or you live in the sort of code world, in which you need to review every change that you make and approve it, which is a dead end, right? 'Cause you can only review so much code.
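The pinned-versus-delegated split Guy describes above can be made concrete with a small sketch. This is purely illustrative and assumes nothing about Tessl's actual spec format: the spec pins the decisions humans own as executable acceptance checks, while the implementation is treated as regenerable output that any generator could rewrite, so long as the checks still pass.

```python
# A hypothetical sketch of spec-centric development. None of this is
# Tessl's real format; it only illustrates the pinned-vs-delegated split.

SPEC = """
Capability: volume discount pricing
Pinned decisions (owned by humans):
  - Orders of 100+ units get a 10% discount.
  - A price is never negative.
Delegated decisions (left to the generator / LLM):
  - Data structures, rounding strategy, code style.
"""

# One possible generated implementation. It is disposable: any code
# that passes the acceptance checks below satisfies the spec.
def price(units: int, unit_cost: float) -> float:
    total = units * unit_cost
    if units >= 100:
        total *= 0.9          # pinned: 10% discount at 100+ units
    return max(total, 0.0)    # pinned: never negative

# Acceptance checks derived from the spec. These, not the code,
# are the durable artifact that gets maintained over time.
assert price(100, 1.0) == 90.0   # discount applies at the threshold
assert price(99, 1.0) == 99.0    # no discount below it
assert price(0, 1.0) == 0.0      # never negative
```

In this framing, reviewing every generated line becomes unnecessary: the spec's checks encode what the humans actually decided, and everything else is left to the LLM's adaptability.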
Tom Hulme: I remember a couple of years ago when we first started talking about this, you made the point of, 90% of products have external dependencies. You've got open source, you've got the requirement to scale. You've got layers on layers and we've had technical debt in the past.
Maybe, actually, what we're talking about is, with the new gen AI tools, we're just gonna get an exponential rise in technical debt. And a video that you sent [00:22:00] me, we're gonna play now, 'cause I think it brings to life the risk in this. This is, I think, a really good kind of summary of the problem.
Video: A multiplayer flight simulator that's going to make me rich.
See, this is all just hype. Engineers are not going anywhere. You can't just launch a flight simulator. It's magic. Engineers are going away soon. Now, make the planes blue. No. Make it realistic like before. More realistic.
Tom Hulme: Kind of sums it up first.
Guy Podjarny: So I sent it because it just brings it along. LLMs are magic; it uses the word, right?
They're magic. So we delegate to them, but we need to find a way to coexist, to have a clear line in which we say: these are the decisions that we want as humans, as the owners of these systems, as the people accountable for these systems. And here are the areas in which we want to tap into the LLM magic, tap into the LLM adaptability.
And think about that kind of played forward, right? Think about what software [00:23:00] development is in that context. How does software live over time? What about dependencies, when things come along? And so we think all of that is made possible in the sort of world of spec-centric development.
And at Tessl, what we're trying to do is to say: okay, what is this new creature, this new workflow? Clearly we're gonna get some of it right, and we're gonna get some of it wrong. We're gonna evolve it. I come back to it being a community play. But I think that eventually, going down the route of thinking that everything will be human-supervised, or thinking that everything will just be product-supervised, those are dead ends. We've seen it. They're gonna limit our ability to tap into these powers.
Tom Hulme: Makes sense. So, two final questions for you. Yeah. More forward-looking. The first is: how does the role of the developer change going forward?
Guy Podjarny: Yeah, so I think software developers will very much be alive and well.
I think software will be very important, will continue to be critical, but the role, the definition of a software developer, will change. I think software developers will move either up the architect path, in which you need [00:24:00] to make more trade-off decisions, express taste, a preference about whether you want to prioritize simplicity or whether you want to be extensible.
Those are trade-offs, and you might get AI support for them, but I think that would be a craft: everybody might be able to build a simple e-commerce site, but if you wanna build a novel system, you would learn how to do that. And the other path is the product one, having the product sensibilities of what matters more to users.
What I do think will happen is that code will disappear. It will remain, just like assembly remains, just like bytecode remains. There will be rare cases in which you need to deal with it, but most of the time I think the code should go away. And that's not always a pleasant statement.
I love coding. Coding is fun. And I think part of our task as a community, and as Tessl specifically, is to keep the fun, to keep the sense of creation when you do that, versus feeling like you're a supervisor.
Tom Hulme: I think it makes sense to me. Certainly, I can imagine specific languages going away. It's interesting to me that English or American English is becoming the language of [00:25:00] development.
Guy Podjarny: Yeah.
Tom Hulme: In a way that I just didn't expect would happen anytime soon. It's definitely not British English. Yeah. Let's jump into the final question then. We're seeing companies grow faster than ever because of PLG, like StackOne. Bolt.new in our portfolio has done an amazing job: zero to a 40 million run rate in 14 weeks, or something like that; it's spoken about a lot. The thing that's most interesting is the efficiency of the organization. It's a small team. It's approaching a $2 million per employee revenue run rate. Yep. So you plot this forward, and very quickly people are talking about the first billion-dollar one-person startup. Is that gonna happen?
Guy Podjarny: I think so. There's an interesting question about the dollars versus the value, right? I absolutely think that we're not far from a place in which a single person can provide value that is equivalent to a billion dollars of revenue today.
I don't think they will get a billion dollars for it, though. If you harken back to the beginning of the, whatever, [00:26:00] mid-2000s, some websites were very dynamic, real-time websites, and they were very hard to build, and those companies could gain value or charge more because of that.
Today you can't charge more for that, because that is just the expectation, and I think consumer expectations will grow. So if a single person can create that sort of world of functionality and value, then another person next to them, and another next to them, can create that as well. So I think the supply of quality software will grow. I guess there will be some turbulence in value over time, but I think the idea is a bit of a fallacy, 'cause it ignores the fact that the supply, I agree, will grow substantially, while demand will remain fixed or maybe grow somewhat. Maybe there will be a very tiny window in which we can have these extremely efficient companies, and I think over time consumers are gonna expect more.
Tom Hulme: I completely agree with you. But if you take the argument to the extreme, in order for that to be true, I think these companies would have to go from zero to a billion-dollar run rate within 24 hours.
Guy Podjarny: Yeah.
Tom Hulme: Because the [00:27:00] speed that we can all copy anything is like never before. So it's a race and everyone is now gonna have to run faster than they have before. Yeah. Guy, thank you so much for the time. I'm gonna hand it across to Luna on the GV team who's gonna introduce our two speed pitches.
Thank you dude.
Guy Podjarny: Thank you.
Simon Maple: Well, I hope you enjoyed that session. Thanks very much, Tom and Guypo. And of course, if you're in London, do check out the Google Ventures (GV) London GenAI meetups to see if there's anything on that you could attend in person as well. A couple of reminders: if you enjoyed the session, feel free to subscribe and leave a thumbs up on our video on YouTube.
And yeah, a couple of things coming up as well: we have the AI Native Dev Con event on May 13th. So depending on when you're listening to this recording, do go to ainativedev.io/events, where you can register for free and catch a whole bunch of great content. So what are you [00:28:00] waiting for? Register for free and we'll see you at the conference.
Thanks for listening, and tune into the next episode. Bye for now.