Podcast

The Software Engineering Identity Crisis

With

Annie Vella

22 Apr 2025

Episode Description

In this enlightening episode of the Tessl podcast, host Simon Maple speaks with Annie Vella, a seasoned software engineer and manager, about the significant role of AI in software development. They discuss the challenges of moving from engineering to management, the importance of adapting to new skills, and how historical technology shifts can inform today's practices. Annie's insights provide valuable guidance for engineers navigating the complexities of AI integration and career development.

Overview

Introduction

In a rapidly evolving technological landscape, the integration of Artificial Intelligence (AI) into software development has become a focal point for many engineers and managers. In this episode of the Tessl podcast, host Simon Maple engages in a thought-provoking conversation with Annie Vella, a seasoned software engineer and manager. Annie shares her experiences and insights on how AI tools are reshaping coding practices and the critical transitions engineers face in their careers. This discussion is particularly relevant for developers who are transitioning into management roles or grappling with the implications of AI in their work.

The Role of AI in Software Development

Annie begins by discussing the transformative effect of AI tools in coding. As she states, "The genie is out of the bottle; these tools are here to stay." This statement captures the essence of AI's impact on software development. While AI can generate code, it is crucial for developers to maintain their problem-solving creativity. For example, AI can automate repetitive tasks—such as generating boilerplate code—which allows developers to focus on more complex problem-solving aspects of their work.

However, Annie cautions that this should not lead to complacency. "We have to learn to trust these tools while still maintaining our creative edge," she explains. Developers must view AI as a collaborator rather than a replacement, ensuring that they remain integral to the problem-solving process. This perspective fosters a symbiotic relationship between human ingenuity and technological efficiency, ultimately leading to enhanced productivity.

Transitioning from Engineer to Manager

The transition from engineering roles to management is fraught with challenges, a theme that Annie discusses candidly. She highlights the identity crisis many engineers face when stepping away from hands-on coding. "As a software engineer moving into management, you find your identity in building things, not managing things," she notes. This quote encapsulates the emotional and professional complexities that accompany such a career shift.

Annie stresses the importance of understanding personal motivations for making this shift. Many engineers may feel an attachment to their technical roots, and the prospect of moving into a managerial role can be daunting. She encourages those considering this transition to acknowledge their feelings and to explore the leadership skills that can complement their engineering background. Developing emotional intelligence and communication skills will be essential for successful management.

Embracing New Skills with AI

One of the key takeaways from Annie's conversation is the importance of embracing new skills in the face of technological advancements. She argues that being open to acquiring new skills, particularly in communication and management, is essential for success in the evolving tech landscape. "The quicker I can get something up and running, the better for me," she states, highlighting how AI can accelerate project timelines.

Annie encourages listeners to see AI as a tool for enhancing their capabilities rather than a threat. By developing skills in AI literacy, project management, and effective communication, engineers will be better equipped to thrive in this new environment. This adaptability will not only benefit individual careers but also enhance team dynamics and project outcomes.

Historical Parallels in Technology Adoption

Annie draws parallels between past technological shifts and the current integration of AI in software engineering. Referencing the Industrial Revolution and the introduction of compilers, she suggests that history often repeats itself. "There are instances where engineers have had to adapt to new technologies, and the current shift to AI is no different," she explains. This historical perspective provides valuable context for current challenges.

Engineers today must learn to navigate the complexities of AI while acknowledging that similar transitions have occurred in the past. By embracing change and adaptability, they can cultivate a mindset that is open to innovation and future technological advancements.

The Importance of Cognitive Load Reduction

A significant benefit of integrating AI into software development is the reduction of cognitive load. Annie discusses how AI can provide contextual annotations in code, making it easier for developers to understand changes without becoming overwhelmed by details. "This reduction in cognitive burden is crucial for maintaining productivity," she notes.

By alleviating cognitive load, AI allows developers to focus on higher-level thinking and problem-solving. This shift can lead to more innovative solutions and improved team collaboration. Annie emphasizes the need for engineers to leverage AI tools to enhance their productivity and creativity, which ultimately contributes to more successful project outcomes.

Rethinking Job Skills and Interview Practices

The podcast also delves into how traditional software engineering skills and interview practices may need to evolve in light of AI advancements. Annie suggests that the industry should reconsider the necessity of expertise in hand-coding when AI can perform these tasks efficiently. "As we move forward, we need to rethink what it means to be a strong software engineer," she states.

This may involve shifting interview practices away from traditional coding tests toward evaluating candidates' ability to collaborate with AI and leverage its capabilities effectively. By adjusting hiring criteria, the industry can better align with the skills that will be most relevant in an AI-driven future.

Future Opportunities in Software Engineering

As the conversation comes to a close, Annie emphasizes the importance of future-proofing one's career in software engineering. She encourages engineers to expand their focus to include system architecture and design, preparing for new roles that will emerge as technology continues to advance. "Learning how to elicit requirements and think about system architecture is crucial," she explains.

By developing these skills, engineers can position themselves as valuable assets in a landscape increasingly shaped by AI. This proactive approach not only enhances individual career prospects but also contributes to the overall advancement of the software engineering field.


Summary/Conclusion

In this enlightening episode, Annie Vella shares valuable insights on the integration of AI in software development and the challenges and opportunities it presents for engineers transitioning to management. Key takeaways include:

  • Embrace AI as a tool for creativity and efficiency.

  • Understand the complexities of moving from engineering to management.

  • Be open to learning new skills to stay relevant in a fast-changing industry.

  • Recognize historical patterns in technology adoption to navigate current changes effectively.

Chapters

[00:00] - Core Skills & the Learning Conundrum

[01:00] - Annie’s Commodore 64 Origin Story

[02:00] - Time-Zone Tango: Scheduling the Recording

[03:00] - Unpacking “The Software Engineering Identity Crisis”

[06:00] - Flow State: Piano Keys, Headphones & Coding Joy

[09:30] - The Reluctant Leap from IC to Manager

[12:30] - First Encounters with AI Tools (Windsurf, Cursor, Copilot)

[18:00] - Productivity Gains vs Code-Quality Headaches

[22:00] - Farewell Linked Lists—Which Skills Still Matter?

[26:00] - Systems Thinking & the Rise of the AI Architect

[33:00] - Trust, Non-Determinism & “Seamful” AI

[39:00] - Testing, Evals & New Guardrails

[45:00] - Future Roles: From Builder to Orchestrator

[51:00] - Wrangling Non-Deterministic Systems & Closing Thoughts

Full Script

Simon Maple: You're listening to the AI native dev brought to you by Tessl.

Hello and welcome to another episode of the AI Native Dev. Joining me today is Annie Vella. Annie is a distinguished engineer at Westpac in [00:01:00] New Zealand. And I loved a story that Annie and I shared just off air: Annie was six years old when she built her first computer, which was a Commodore 64, which was wonderful.

And that really kicked Annie off into the engineering world. Annie's got a very broad engineering background, half IC, half engineering leadership, and she's actually also doing a part-time Masters exploring the impact of AI coding assistants in software engineering at the University of Auckland.

Annie, a massive welcome to the AI Native Dev. How are you?

Annie Vella: Thank you very much, Simon. I'm great and I'm very glad to be here speaking with you today

Simon Maple: And you're in New Zealand, so it was a little bit tricky finding time for the recording, but we'll let the viewers work out whether it's AM or PM on your side and my side. We're 12 hours exactly. No, sorry, 11 hours apart, I think.

Annie Vella: Yeah, every time I try and schedule something with somebody overseas, it's a matter of using any tool [00:02:00] possible to work out that time difference. It's quite frustrating.

Simon Maple: Absolutely. Oh gosh. Time zones are hard. So we're talking today actually because of a wonderful blog piece that you wrote.

And I think it's one of those things that will absolutely resonate with our audience. Your blog is titled The Software Engineering Identity Crisis. And from our point of view, we are thinking a lot about similar concepts: how our roles and software engineering in general will change with the introduction of AI coding assistants, as they become more of a driver in the world of software engineering.

Why don't we start, perhaps tell us a little bit about the blog, the summary of the blog, the high level view of what you covered.

Annie Vella: Sure. I guess it starts with the fact that I have, like you mentioned, been a software engineer in one way, shape or form for a very long time. And I've absolutely loved it.

I've loved being a coder. I've loved being a problem solver. And [00:03:00] with the introduction of this new amazing technology, which does feel like magic, one can't help but wonder: okay, what is that going to do to the day-to-day job of a software engineer? And it's exactly the same reason why I chose that as a Master's topic.

But I've been digging deep into what's really driving me to ask these questions, right? Is it because I wanna help design better AI tools for people to use? Or is it because, I guess, deep down inside I fear that we might be losing something, the thing that attracted me and many others to becoming a software engineer in the first place?

And I have been writing a bit about the journey and learning how to use these tools. I've been using Windsurf, that's probably my go-to at the moment, aside from ChatGPT, but for coding, Windsurf I find incredible. I've written a bit about that, but I really hadn't gotten to the root of what I think about [00:04:00] a lot, what I like talking about, what I'm trying to really get to the bottom of as part of my Masters.

And it came down to: there's a lot of productivity gains to be made with these tools, and yes, our roles are changing, but at the core of it, we are losing something as well. A lot of us software engineers have tied our identity to the work that we do. We are very proud of the code that we write or the solutions we come up with.

And that very much depends on what stage of your career you might be at. But certainly in those formative years, when you're learning how to code, how to structure your code, how to find the bug, how to fix it, you're actually refining those skills through practice. And if now we are delegating those to an AI, to a tool, then you're not gonna get to practice those things anymore.

And so you're not necessarily gonna be forming those skills. But you might also not be doing the thing that you enjoy doing the most. And I reflect on my years as an IC and also all the time that I've spent coding in my spare time, because [00:05:00] we also do that, right? That moment when you get into the zone, sometimes we call it flow state.

We might refer to it in different ways, but it's that moment when you know what it is that needs to be done, you've solved the problem in your head to some degree. At least you have a path. Maybe you've got some good music on. I know for me it was always these big headphones with good fun music playing that sort of drowned out the rest of the world.

And then it felt to me like I was playing the piano. When I was writing code in that mode, the world disappeared and it was just me and the computer, and I was making the magic happen through my own skills and ability to type fast, because I taught myself to type quickly when I was pretty young.

And all of that almost doesn't matter anymore. You might not get to do that as much anymore. There's no one preventing you from doing that. But certainly in the workplace, if you can just get an AI to generate at least all the boilerplate code for you much, much faster.

Why? Why wouldn't you? Why would your employer say, yeah, no worries, just type it out slowly, it's okay, we prefer that? No, for sure Pandora's box has been opened. The genie's out of the bottle. These tools are here, they're here to stay, and they are magical, but they replace a part of us that I think a lot of us quite enjoy.

So diving into that, I started thinking more about what that really means. So fine, perhaps we are letting go of something that meant something, that we tied ourselves up with. But by the same token, some of us have already experienced changes like that in our careers.

And upon reflection, I have: by becoming a manager partway through my career, very reluctantly, because I really enjoyed the coding and I didn't wanna move away from it. I thought, is there a parallel I could draw there? And the more I thought about it, the more it seemed that maybe that's the angle to lean into, to realize that you don't need to choose one or the other.

There is the engineer/manager pendulum that Charity Majors talks about. So maybe here there's something similar, where we can [00:07:00] swing between the two sides. It doesn't have to be one or the other. Finding that balance might be the right approach, for many of us at least.

And yeah, so all of that resulted in a very long blog post.

To be honest it's a lengthy one, but I clearly had a lot to say.

Simon Maple: Yeah. And it's been very well shared. And I guess that experience that you had, moving from an engineer to a manager, clearly came back to you when you started using Windsurf or other tools: it brought back memories from when you did move from engineering to a management role. Maybe there are listeners who have made that similar move and stepped away from the day-to-day coding much more.

I guess that was presumably your inspiration, your motivation for writing the post. When you move from an engineer to a manager, when you manage a team, it's slightly different. Perhaps you're even further away, maybe managing a project versus having more of that creative aspect.

[00:08:00] And there are gonna be, I guess, two different styles of roles there in the management of engineering teams. One is probably more hands-on, and one is more like project management. So there are probably people who are more hands-on and more hands-off from the development side. When you're now using AI tools, how much enjoyment do you feel you lose moving to a more prompt-based way of developing? Do you still have that real connection to writing code? Or do you still feel like there's a level of creativity, a level of feedback, that provides you with that thing we desire: actually being able to create, build, and achieve something through building code?

Annie Vella: Yeah, I certainly think that there's still an element of creativity. It's probably more in the eye of the beholder, right? It depends on what it is that you personally, as an individual, enjoy out of building solutions. I'm a problem [00:09:00] solver. I enjoy doing puzzles of any sort. So for me, using these tools, there's a real positive aspect, which is that I get to see a result much quicker. Because, let's face it, for the sort of stuff that I'm using it for in my own home projects, and helping me analyze some of the data that I'm collecting as part of my Masters, time is of the essence. And the quicker I can get something up and running, the better for me.

The sooner I get the insights, and so on. But if I think about the switch from being a software engineer to moving into management: now, I don't know if I'm unique in this way, but there was a time when I wasn't even that far into my career, I wanna say maybe five or six years in.

Thinking of a career as like a 40 year long thing, five years isn't long, right? And already I had people saying, you might like to consider a move into management because you've got some really great soft skills and I think you'd make a great manager. And I just, no, this is not why I became a software engineer.

I like building code. I like [00:10:00] working directly with the computer and solving the problem myself. Having that sense of accomplishment, that satisfaction of seeing something built and then working out that actually there's a better way I can do it. It's a very personal thing that you're doing one-on-one with the computer, right?

I didn't wanna give that up, not so early in my career. And I had a bit of pressure from various managers to consider a move into management, and I kept it at bay until about 10 years into my career, when I finally realized that I had reached a point where I wasn't really learning all that much more about how to write better code.

Writing the code didn't seem that hard anymore. I could do it. I knew I could do it. I was teaching myself other skills, like how to debug memory leaks, just trying to get into something slightly deeper, something slightly more complex. So eventually I succumbed to the idea that, okay, fine, I will try being a manager, because there are, no doubt, some skills that I might be able to learn there.

But then that transition is quite hard. And I don't think [00:11:00] many companies recognize that: taking someone who was a pretty good software engineer and then moving them into management is not always a smooth transition. And it's something that I talk about with a lot of people when I'm interviewing other managers. If this is their first foray into management, I always ask them: why? What is it that's pushing you towards this?

And there's always a variety of answers, but I make sure that they understand that in a management role, you're gonna have different priorities and you're gonna spend your time on different things. So if the job description implies that you might still get to code 50% of the time, that's unlikely to happen.

As much as you might want it to happen, managing people or projects or, you know, anything that's more wetware is a whole different ball game. It's a different set of skills. You're leaning into communication skills, into a bit of empathy, understanding the relationships between people, the strengths and weaknesses, and how you [00:12:00] pair up people so that you create opportunities for people to learn and do their best work. So you're really learning quite a different skillset that you're practicing day to day.

Simon Maple: And this is the identity crisis that you mentioned because you say in the blog that we find our identity in building things, not managing things.

And I think with the move from an engineer to a manager, very often, hopefully at least, it's the individual's choice to move into management. So they're almost trying to fulfill a decision that they've made. If we draw a parallel to some of the AI tools and engineers using AI tools today, there are gonna be people who are having AI tools forced upon them. It's amazing, we're seeing a number of companies now that are almost saying: yes, we absolutely want you to use this, and if you're not, you need to be telling us why you're not using these productivity tools, essentially.

Annie Vella: That's right.

Simon Maple: What can we learn from folks like yourself who have moved from an engineer to a manager role?

How do [00:13:00] we best handle it as we go through this identity crisis, moving into an AI-native world or a software development world with AI tools? How do we best either embrace this identity crisis or at least manage it in some way whereby we can still succeed, whether or not we embrace some of the concepts of AI in software engineering?

Annie Vella: Yeah, you are absolutely right. I was thinking about this earlier today as well: as a software engineer moving into management, theoretically that's a choice, whereas here, perhaps you won't have a choice. Really, I think what's important is to probably not resist it, because as you say, sooner or later you're probably gonna be in a job where you're expected to use these tools and learn to use them really well, because they're your new toolkit.

Embrace the change, because ultimately what it's doing is allowing you to build a new set of skills. Some of us get to that [00:14:00] point in our career where we recognize that. I did, by staying in the same role that I'd been in for a while: I wasn't really gaining a whole lot of new skills, so maybe it was the right time for me to accept a sideways move into a management role where, essentially, I felt like I went back to being a novice. I was learning how to do one-on-ones, how to set up a hiring process and an interview process, how to do performance reviews and all of that, which was so different. I did not go to university and learn all that stuff.

I did the programming that I was taught. But by doing those jobs and setting all of that up and caring deeply about it, I learned a whole lot of new skills that have come in quite handy, even when I've returned to IC roles. Just being able to see the world through the lens of: what would my manager think about this?

Or, why is my manager asking me to do this thing that feels to me like it makes no sense?

But actually, having been in their shoes, now I can better understand the perspective that they have. So you do learn so many new skills. [00:15:00] So just accept that there's an opportunity to see it through that lens and build up a whole new set of skills, not just around how to communicate better, because I think that's one of the fundamental skills of getting good with these tools, but also a whole bunch of new jobs are gonna crop up.

The term AI engineer is now floating around. There's gonna need to be things built around prompt libraries and specifications, how to write specs really well, or how to verify that the AI is outputting the right code or making the right decisions.

We're now working with non-determinism, which is something that probably makes a lot of us quite uncomfortable. So how do we manage that? How do we accept that there's likely to be a lot of unpredictability, both in using them as AI coding assistants and in building agentic systems? So suddenly there will be new areas of specialization that you can gravitate towards.

So it's not [00:16:00] like the door's closing, it's just shifting. Yeah. And a whole new set of skills will become apparent as really important in the future. So I guess it's being open to that change.

Simon Maple: Yeah, and I think there's a couple of things there, actually. One thing that you mentioned, being accepting of that and almost very openly wanting to learn those new skills, is super important, I think. We'll be faced with folks who are just coming into the industry who are ramping up on these skills so fast, almost like it's just the way things are rather than a change. And there are individuals in the industry who have been in it for many years who are gonna be at a disadvantage somewhat, because people are jumping into this AI space so fast.

However, those people have a massive advantage in understanding and knowing how software is built from the ground up. And we were talking about this off air, and you mentioned going through that experience of: there's an issue in prod, we need to understand why.

We need to [00:17:00] learn the lessons, essentially, of everything from bad code to bad design to bad architectures, as to why these things happen. So people who are in the industry now have that unique knowledge, I guess that mechanical sympathy of how things work under the covers, as well as the opportunity to learn about the new AI technologies on top.

And that's probably gonna be fairly unique in many years to come, having that in-depth knowledge.

Annie Vella: Yeah. And I guess there are parallels in history, like I mentioned, the Industrial Revolution, which obviously I wasn't around for. History repeats itself. So there are instances like the one that I mentioned, but also, I wasn't around when compilers were introduced, but I imagine that the generation of engineers who were halfway through their careers when that shift happened also had to learn to trust this thing that was now outputting the code that they previously had to write by [00:18:00] hand and understand intimately.

There would've been, I'm sure, lots of criticism: the compiler cannot possibly create assembler as good as I can, and all of that. Or, how am I gonna know how to debug it, because I didn't write it? There are lessons we can look back on, to see how they worked through that shift, how they were either at an advantage or a disadvantage, and what we can do to help those who will find it more challenging, for a variety of reasons. Actually, on that topic, I came across a very long, 290-page World Economic Forum report on skills and the job market, looking forward to 2030. Seems like a long way away. It's really not. It's a huge document, so clearly I have not read it all, but what caught my attention was this sort of quadrant of what skills will become more important or less important.

And the one that stands out as the most important, what is currently already important and will become more important, was [00:19:00] resilience, flexibility, and agility. So those are the sorts of really core skills. Do we teach that in school? Is that something that you can go to university and study? This is what makes this shift so interesting: a lot of the skills that appear to be becoming very critical are things that you mature into.

And that's where my conundrum is. How do you teach those? How do you short-circuit that experience-based, deliberate-practice learning process and moonshot right to where you're already critically thinking, thinking at a higher level, with that holistic perspective? That's the right-hand side of the Dreyfus model, where you've reached the level of expert and you're functioning from a level of intuition.

How do you build intuition if you haven't lived through it? So

Simon Maple: yeah,

Annie Vella: that's the big question mark for me.

Simon Maple: Yeah, very interesting. And in the blog you talk about shifting roles, which is obviously a big part of the post. You mentioned going from effectively builders to managers, from creators to [00:20:00] orchestrators.

Instead of writing code, to managing AI systems. You referenced Patrick Debois, who wrote the four patterns of AI native development, which is still something that is very much up for community involvement and contributions. Patrick talks about going from implementation to intent, from producer to manager, which is obviously very aligned with your post, from delivery to discovery, and from content to knowledge.

We'll link both of these posts in the show notes as well. In producer to manager, which I think is most relevant here, there are a few things that are probably quite relevant. Patrick talks about how AI produces the code and you review the code.

And I think that's a problem. You know, we're not good at reviewing other people's code anyway. We'll talk about this a little bit, in terms of how much of a skill we really need to build there, and I guess, fun-wise, creativity-wise, how interesting that's gonna be.

Patrick also talks about [00:21:00] cognitive load reduction, which is interesting: how AI can effectively give us contextual annotations and explain much more of what code changes are happening, versus perhaps a diff with red and green highlights. And acceptance fatigue, which is a big problem. I think this happens a lot with code review, and will very likely happen here. I do it a lot more than I wish to admit with Cursor and things like that: it worked the first few times, so let's just keep accepting it and trusting it. Is that a good thing as things become reliable enough?

And of course the situational awareness. Effectively, when something fails, can AI help us with that? Because if we didn't write that code, it's hard for us to identify where something went wrong. So yeah, you covered this very beautifully in your blog as well, with creators writing code becoming orchestrators, effectively managing those AI systems. And you talk [00:22:00] a little bit about reclaiming responsibilities.

Doing more design work and systems thinking and things like that, which today is a very specialist area that specific people do.

So I guess let's take that a little bit from the top down. I've mentioned a few things there. First of all, if AI is producing the code and we review it, what's the fun in that, Annie?

Annie Vella: Yeah, that's a really great question. And you're right, we don't make great reviewers.

I know, as a software engineer or even as a manager of software engineers, what it's like begging people to please review your code. You've got this PR you'd really like to get merged in, but it's a bit long, and I'm sorry about that, I went overboard with the refactoring.

It's hard enough to do that when you've written all the code yourself, so there aren't just screeds of it. How do we make it fun? Whoever solves that problem will probably be quite a rich person. So I love the ideas that Patrick writes about, better than just the green and red diffs.

[00:23:00] Thinking back to the days when we didn't even have great diff tools. I think the trick with using these tools, and it's something that I mention to people who are just starting to play with them, is that source control is your friend: frequent commits, frequent PRs, because it can just get away from you.

It sounds like you've experienced that a bit yourself. Before you know it, you've changed 18 files in ways you didn't expect to, and now it's becoming overwhelming even for you to know which ones to pick for your PR. That's not a great state to be in, because you're going to end up committing stuff that, if you're honest with yourself, you probably haven't really checked.

And then the next person in line who reviews your code may not really check either, because vigilance fatigue is a thing. That's going to become a bigger problem, I imagine. And I think the 2024 DORA report already starts to point at that. They've shown that for all of the productivity gains, and even the increase in code quality, we are [00:24:00] seeing a slight decrease in delivery stability.

Potentially attributed to the fact that PRs are getting bigger and people are just accepting code that may have more issues in it. A more recent GitClear report that I mentioned in my blog post showed some pretty staggering figures on code duplication: increases in duplication and in code churn, code that's released and then has to be quickly modified and re-released, likely because it had a problem in it. So I think the DORA report will be an interesting thing to keep an eye on over the next few years as we go much faster. Are we really more productive if we're just rolling back code more often and having to redo the same bit of work? We're probably producing a lot more code, but maybe not gaining productivity in the ways we thought we would: we're gaining productivity here and [00:25:00] then losing it there.

Simon Maple: Yeah.

Annie Vella: You're just shifting your focus.

Simon Maple: Yeah. And I think over time, if we're stepping away from code that much, are we the right people to decide what good code looks like?

Or do we actually need much more reliance on testing and automation to be able to say, irrespective of how good this code is, whether it is or isn't doing the right thing? Yes, we can do things to make it more performant, or refactor it into more maintainable, better-looking code, but we know whether it's doing the right thing.

And I feel the review sometimes needs to be focused more on correctness versus style, which I think a lot of reviews typically over-rotate on.

So I wonder how much of that can be automated away through much better testing, which I guess historically we've never been great at.

Simon Maple: But we need to rely much more upon [00:26:00] it, because that's probably far more interesting than sitting through code reviews.

Annie Vella: When I do code reviews, and I don't mean in the sense that I'm the only person reviewing the code, I still read through code because, one, I find it interesting, and two, it's the closest I'm going to get to the truth.

Yeah. When somebody says, oh, we solved the problem, rather than asking them to describe it to me, if they're busy, I can probably just go look at the code. And over the years you get pretty good at recognizing the shape of code, as I like to refer to it.

If I see it on a page, as long as it's relatively well-structured code, you can pretty quickly recognize the flow. And I guess that's a skill we'll need to build in one another: how do you get good at recognizing which parts of the code you should spend more time reviewing for that element of correctness you very rightly pointed out, because testing is about verification. Today I came across a blog [00:27:00] post that talked about product managers now starting to focus on this idea of building evals, as a sort of analogy to unit testing, which was for more deterministic systems. Product managers who are perhaps looking to build

very cool agentic systems that can book a flight on your behalf: please find me a great weekend getaway somewhere around San Francisco, and off it goes, except half the time you end up with a flight booked to the middle of nowhere. And that's not what you asked for.

So how do you prevent that from happening? Much like we've started to talk about spec-driven development, product managers are talking about this concept of evals: evaluating that the output was in line with what they expected it to be. I don't know what sort of tools they're imagining, but you start to see the importance of that safety net, that guardrail, and the more [00:28:00] automation you can put around it, the better.
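One way to picture an eval, sketched here in Python, is as a set of assertions over an agent's structured output rather than its exact wording. Everything in this sketch is invented for illustration: `plan_trip` stands in for an LLM-driven agent (stubbed here so the harness itself can be shown), and the check rules encode the kind of expectations a product manager might care about, much like assertions in a unit test.

```python
def plan_trip(request: str) -> dict:
    # Stub: a real system would call an LLM/agent here and parse its answer
    # into a structured result. The eval below only sees this structure.
    return {"destination": "Napa Valley", "origin": "San Francisco", "nights": 2}


def eval_weekend_getaway(result: dict) -> list[str]:
    """Return a list of failed expectations (empty list means the eval passed)."""
    failures = []
    if result.get("origin") != "San Francisco":
        failures.append("trip does not start in San Francisco")
    if not 1 <= result.get("nights", 0) <= 3:
        failures.append("not a weekend-length trip")
    if not result.get("destination"):
        failures.append("no destination chosen")
    return failures


failures = eval_weekend_getaway(plan_trip("a great weekend getaway near San Francisco"))
print("PASS" if not failures else failures)
```

The point of the shape is that the checks survive non-determinism: any of the many valid answers the agent could give will pass, while a flight booked to the middle of nowhere fails loudly.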

Because otherwise we're all just going to become AI babysitters, watching its output and preventing things from leaking out in ways we don't want. And I'm not sure there's as much joyfulness in that type of work as there is in being the creator.

Simon Maple: Yeah, I completely agree. When we think about the skills developers are going to need to be successful and to be that creator, we've talked about flexibility and the ability to change. From a skills point of view that's more specific to development, things like prompt engineering, what do you feel will be hardest to let go of? Perhaps there's some analogy here to moving from engineering to management. Once we move away from those skills or deprioritize them, what do you feel are the most [00:29:00] important skills we need to grow to be successful as an AI-assisted or AI-native developer?

Annie Vella: What that makes me think of is, I don't know when you last sat an interview for a software engineering role yourself, Simon, but we still tend to ask a lot of very detailed questions. So I'll tell you a little story. I interviewed for a job at Google when I was about three years into my career. It was my absolute dream to work for Google. This was before the days of video conferencing, so it was literally over the phone. I had to hold a telephone up to my ear and try to solve some very interesting puzzle, brain-teaser questions.

I guess it was just before the days of LeetCode, right? Needless to say, I didn't pass that interview. Eventually I was offered a job, but it was not in the domain I wanted. It was software engineer in test, and I had an issue with that, but that's for another blog.

But throughout the years I've noticed that we still quiz software engineers on their ability to solve [00:30:00] sorting algorithms or search algorithms, or recursion: can you reverse a linked list? The industry still believes, and maybe it's changing now, but until now it has still believed, that those skills are absolutely critical to being employed as a very skilled software engineer.

And then I'd say 90% of the time, you join the company and you don't get to work on things like that. You're stitching together libraries that someone else has written the fancy sorting algorithm for. Why on earth would you write your own? Unless you work for Google, you're probably not going to be building the next fancy sorting algorithm from scratch.

And there are literally people who enjoy going through those sorts of LeetCode exercises; there's a reason that website exists, right? There are leaderboards of people who can solve the most complex problems the fastest. So that, I don't think, will be very important anymore.

That's a skill that a lot of [00:31:00] software engineers have taken to be the core of being a really strong software engineer. That is a skill I think we're going to need to learn to let go of, both in interviews and personally. It just won't make much sense anymore. Why would you need to be an expert at writing that level of code when an AI can probably generate something good enough for you?

Simon Maple: That alone leaves a big gap. Because when I think of those kinds of things, even today, when I'm not close to product code, I think of them as challenging.

They're problem-solving. They're interesting problems to solve. Do we have a replacement for that, or do we need to learn to love other challenges?

Annie Vella: I think you're going up a level, in terms of a level of abstraction perhaps.

I think it's still very important to understand how systems work, how they interact, where your interfaces are, and how to protect parts of the [00:32:00] system from a resilience perspective. These SRE concepts are still important, but do they give you the dopamine hit you get when you finally make that recursion work the way you want it to?

I'm not convinced it's the same.

So I don't know exactly what you would replace it with. Maybe for fun you might still go and do some of that type of work, but that's going to diminish. There might still be some people who enjoy writing a bit of assembly just because it's something they enjoy, as some did a few years ago.

But it's not something you probably need to do for an interview or in your day-to-day job anymore. I guess this is why the identity crisis blog post meant so much to me: there are some gaps there that I can't square up. I can't see precisely how we're going to get quite the same type of enjoyment out of this type of role. But certainly, for the foreseeable future, there is still going to be plenty of work in understanding systems and knowing [00:33:00] how to hand-code or hand-change code, because software is everywhere. It's embedded into everything, and we all know there are still older systems that run on very old technology and old programming languages.

There are still plenty of COBOL developers out there who are gainfully employed, and I'm sure that 30 years ago someone was probably thinking that was a time-limited offer, and yet here we are in 2025, and I'm good friends with a few, so I know. Things will change. Some things will change much quicker, and some people will move with that change quickly, while others will take their time and perhaps really find a niche for themselves, leveraging those skills.

But yeah, this is what makes this topic so fascinating and so quickly evolving.

Simon Maple: Yeah.

Annie Vella: And what we're talking about today may not be valid in a week's time.

Simon Maple: Yeah, very true. And certainly those developers are very well paid as well. [00:34:00]

Annie Vella: Maybe we've got something to look forward to.

Simon Maple: There we go. So you mentioned systems thinking. When we think about that level of abstraction, we're thinking more about the architecture, the systems, the interconnectivity between those apps. Are those the skills, from an application point of view, that we need to either grow or recognize we need to spend more effort on?

Because I think those are some of the parts that actually make applications that much better, because they have a greater impact on how they actually work, how they run, how they work with each other, and how maintainable they are.

Annie Vella: Yeah, and how successful they are in the end, right? Do they even solve the customer problem you were trying to solve? That's where the other part of that blog post, and another one I've written, comes into play. It's something I noticed during my career, and I suspect there are many factors at play here. When I went to university and did my compsci degree, I was taught the whole software [00:35:00] development lifecycle: eliciting requirements by interviewing customers, understanding what problem they were trying to solve, trying to steer them away from telling me what they thought the solution should be, and working through use cases. At the time UML was still quite popular, so we modeled it all using UML and worked through scenarios for each of those use cases.

And then thinking about all the testing that would need to be done, the different types of testing, the edge cases, and then building it, applying all that testing thinking, then maintaining it and upgrading it and all of that. The full end to end. I was lucky enough to do all of those things in a number of the jobs I've had.

Part of it might be that a lot of the jobs I've had have been at smaller companies, from quite small startups, 10 people or even fewer in some cases, up to scale-ups with 400-ish people. In those sorts of organizations, you tend to wear a lot of hats. Whatever needs doing, [00:36:00] you do it, if you've got either the skills or the interest, or there's no one else.

So you end up exercising those skills, practicing them, and learning as you go, and you build up this holistic picture of what it takes to build a system like this.

Simon Maple: Yeah.

Annie Vella: But throughout the years, what I feel I've noticed is this huge demand for software engineering, because, like I said before, software just runs the world now, doesn't it?

Imagine a world where you don't have a mobile phone in your pocket with constant internet, and all the apps you rely on today, the games you play, the media you spend too much time scrolling through. It's really hard to imagine.

But software runs all of that. Behind the scenes there are all these software engineers busily building up the systems we just take for granted today, right? That has meant a huge increase in demand for the ability to build software. And no matter how hard universities tried, they still weren't producing enough graduates who could do the jobs.

So I think what ended up happening [00:37:00] is that we recognized as an industry that we needed to apply a bit of specialization. Of those who went through the appropriate level of education, whatever that means in the context of the individual, you now get to focus on coding, because no one else can do that.

You do the coding; we'll get someone else to design it, we'll give that to an architect. We'll get someone else to understand what the problem was: that's a product owner or a product manager. We'll get someone else to test it for you, so you don't have to waste your time on that; it's someone else's job.

And we'll get operations to deploy it for you and run it, and they'll be on call, so don't worry about it. This isn't the case across the board; I know there are many engineers out there who do the whole gamut like I described before. But there are equally a lot of us who ended up working in roles where we were just coding.

That's all we were expected to do, which means we haven't had the opportunity to practice those other skills as much; they became someone else's job, and those people have become very good [00:38:00] at those jobs. So what happens now that AI is able to do the bit we've specialized in rather well, and a lot faster than most humans can type? We need to start looking a little broader at some of those other areas that previously only some of us got to do. Learning how to elicit requirements, that's probably a really good skill to start looking for opportunities to practice. Is it quite as enjoyable as reversing a linked list? Maybe not for everyone, but there will be value in it. And architecting systems: the software engineering career ladder seems to put the architect at the top of the food chain, right? You have to have gone through all the hard yards before you get to be an architect. It doesn't really make sense to come out of university as a software architect; you haven't felt the pain yet. You have to have lived through working out why an architecture just doesn't make sense. So how do you [00:39:00] teach those skills without that background? Again, we're back to the beginning of the conversation. It's hard to even fathom how you're going to gain those experiences, but at least for anyone who's currently a software engineer, start looking for opportunities to think about the bigger picture, the architecture, the system design that fits best in a particular use case.

And if you're going to start looking at agentic systems, that's a whole new area where there really aren't many specialists yet, because it's such a new domain: systems where you're actually using an LLM as the orchestrator that decides for itself which tool or skill to use next.

How do you even test that? Because now it's non-deterministic, and we're not very good at thinking through non-determinism. The unpredictability is a bit unsettling.
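One common way to cope with that non-determinism, sketched below under invented names, is to stop asserting on a single run and instead assert on the pass rate across many runs. `flaky_agent_step` is a hypothetical stand-in for an LLM-orchestrated action, simulated with a seeded random generator so the sketch itself is reproducible:

```python
import random


def flaky_agent_step(rng: random.Random) -> bool:
    # Stand-in for an LLM-orchestrated action that sometimes goes wrong.
    # Here it is simulated as succeeding roughly 90% of the time.
    return rng.random() < 0.9


def pass_rate(runs: int, seed: int = 0) -> float:
    """Run the step many times and report the fraction of successes."""
    rng = random.Random(seed)  # seeded so the test itself is deterministic
    return sum(flaky_agent_step(rng) for _ in range(runs)) / runs


rate = pass_rate(1000)
# The quality bar is a threshold on the rate, not a single expected output.
assert rate >= 0.85, f"pass rate {rate:.2%} below threshold"
```

The design choice is that the test encodes a tolerance (here 85%) rather than an exact answer, which is the same shift in mindset the eval idea asks of product managers.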

Simon Maple: Yeah. And the more we go down that avenue, I think trust becomes a massive deal, particularly when we're thinking about agentic [00:40:00] systems taking on greater tasks. There's definitely a journey we need to go on with trusting results from LLMs. It's amazing, actually, just in the short time that AI has really been mainstream over the last couple of years, how much less trust and security are talked about in general.

I remember it being a case of, oh my gosh, we can't use this in our production environment, we need to go through the CISO first to work out whether we can use this tool. These days, people are rushing to work out how they can make those productivity gains, whether that push comes from the board or from the CEO down.

But as we get more autonomous with our development, what is the trust dynamic? First, in terms of us understanding whether the code is doing the right thing; and secondly, we can only step away from that code if we truly trust the AI that is [00:41:00] generating it. What do you feel we really need in order to truly trust this process?

Annie Vella: The research I've seen on trust is emerging. There has been a lot more research around the productivity gains, which is probably why, as you say, so many companies are tripping over themselves to get in on that action. But the trust aspect, I've experienced it a bit myself just through using these tools. Expectations are pretty high, because you've read about it and your first experience with it was pretty good. But then the unpredictability starts to erode that trust, and it's hard to rebuild.

Whereas with humans, you start off with less trust, but you build it up over time through interactions. I remember doing a lot of reading about this when I was trying to be the best [00:42:00] engineering manager I could be. How do you gain enough trust to be able to give people constructive feedback? Because no one really likes to hear constructive feedback. People say they want all the feedback because it helps them grow, and that is absolutely true, but it can hurt to hear about something you've not done well or could improve. So how do you get better at giving that type of feedback?

The trick is you need to build trust. You can't just give feedback to someone you have no relationship with and expect they're going to take it on the nose and accept it; you're a nobody to them. But if you've taken the time to have, I think it's something like three positive interactions with somebody, then you're more likely to have a positive interaction when giving them feedback.

So that's how humans build up trust with one another. Hopefully over time it increases and you learn to finish each other's sentences, and if something happens to break that trust, you can rebuild it through a [00:43:00] lot of effort. With AI, it feels a bit different.

You start off maybe having a lot of trust in it, and then you hit a brick wall when it's going round in circles, producing something that's not at all what you asked for. You're telling it that's not what you asked for, and it's agreeing with you, because it doesn't signal doubt, right?

That's one of the main reasons that trust erodes: it's so confident in telling you the wrong thing. And then all you have to do is ask it about something, and it changes its mind completely. And you're like, no, I was just asking a question, I didn't expect you to go and undo it all. Just stop, please.

So that trust is really important, but how do we build it? I actually read a very interesting paper about something Google researchers are calling seamful AI. I'm not sure I'd heard the word seamful before, but think about how, most of the time, we're building seamless experiences, right?

We're trying to hide away the seams so that you're not even aware of [00:44:00] them. But what this paper argues is that maybe that's the wrong approach. Maybe, to increase creativity and confidence, particularly when using AI coding assistants in software engineering, what we actually need to do is highlight those seams and use them to our advantage.

For example, one of the use cases they suggest is that when the AI is not confident, it should not pretend to be. It should tell you: here's some code that would solve the problem, but I have a confidence rating of 20%. That could be an indicator to the human in the loop, the developer, to go, ah, this is one of those situations where I need to take over a bit more and not just trust the AI. For boilerplate stuff, it should have a much higher confidence level, because it's just boilerplate code.

There aren't too many different ways to solve that problem, so go for it. And so maybe you can calibrate that trust through [00:45:00] these very intentional, subtle hints.

Simon Maple: Equally, code reviews can then focus on the areas where the AI has the least confidence, rather than spreading your time evenly across a review: the areas that are probably highest risk and lowest confidence. Those are the areas where we as humans can work together with AI to make the code more solid.
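That routing idea can be sketched quite simply. Assuming, hypothetically, that an assistant attached a confidence score to each change hunk (the seamful idea above), review effort could be ordered lowest-confidence first; the hunk data, file names, and threshold below are all invented for illustration:

```python
# Hypothetical change hunks with assistant-reported confidence scores.
hunks = [
    {"file": "billing.py", "confidence": 0.20},  # complex business logic
    {"file": "models.py", "confidence": 0.95},   # boilerplate
    {"file": "routes.py", "confidence": 0.60},
]

# Below this threshold, a human takes a closer look (an assumed cut-off).
REVIEW_THRESHOLD = 0.8

# Keep only low-confidence hunks, worst first, so review time goes
# where the risk is highest instead of being spread evenly.
needs_close_review = sorted(
    (h for h in hunks if h["confidence"] < REVIEW_THRESHOLD),
    key=lambda h: h["confidence"],
)

for h in needs_close_review:
    print(f"{h['file']}: confidence {h['confidence']:.0%}")
```

In this sketch, `billing.py` surfaces first and `models.py` never reaches the reviewer at all, which is exactly the uneven allocation of attention being described.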

Annie Vella: Another paper by a similar group of authors found, upon interviewing software engineers, that creativity in software engineering isn't necessarily what you think it is.

It's not all about something totally novel; it's about finding novel ways of stitching together existing components, libraries, and frameworks, essentially building bigger systems from existing pieces in new ways. So to encourage creativity, perhaps in those more complex pieces of logic [00:46:00] you're getting help with from an AI coding assistant: if it has multiple ways it could solve the problem, some of which might encourage reuse, for example, then it could give you more than one option and explain why, in some cases, one would be better for reusability.

In other cases you might prioritize other aspects, and it would really highlight that there are many choices to be made here. The AI is not going to make that decision for you. You get to make that decision, because that's how you express your creativity. And through that interaction, perhaps a bit more trust is built as well.

Simon Maple: Yeah, I like that. I would trust it more if LLMs gave me multiple options and I chose the right one, and I could see its, I don't want to say thinking, but its process. That would fill me with more trust, being able to say, actually, yeah, this is good.

And I would actually feel more ownership of that as well, which is probably another important thing. So I guess as we think about the identity pendulum, [00:47:00] as Charity Majors refers to it, and the potential careers that could exist in this world where we're changing identity: do you still see software engineer even being a title? Do you feel there are going to be various flavors of software engineers? Let's get the crystal ball out and think about what a software engineer's job looks like in the AI era.

Annie Vella: Yeah. I think we're going to need to become very comfortable using AI.

Some of us might take it for granted that it's just like talking to another human, because there really are a lot of similarities. If you're prompting, if you're chatting with an AI, it is like talking to another human. And in fact, in my research, what I've seen is that when people are asked how they would improve their prompt engineering skills,

a lot of the answers are around improving your communication skills, leaning into that. Some people think of it as a more formal means of communication; some think it's actually just like talking to one of your colleagues and explaining how you would like something done. Maybe getting good at writing, maybe getting good at teaching and leading in that sense.

If that's the role, or a lot more of the role, are you still a software engineer? I think the role will be around for a while to come, thankfully, because like I said earlier, there are still so many systems that need the skills we have today. But learn how to use a variety of AIs as well, because they have their nuances, right?

Different models even have their nuances. Windsurf and Cursor are quite different to something like GitHub Copilot, although it's catching up. And then there are the multimodal AIs you can use to generate images and so on. So get good at using a variety of these to supercharge yourself.

I think those are skills that any individual, not [00:49:00] just software engineers, should get good at. And then beyond that, honestly, think through the new problems that will be created by working in this new way. Because inevitably, over time, we're going to be writing code faster, and with that vigilance decrement we talked about, we'll just produce more and more.

We are absolutely going to see, as we're already starting to see, code take a different shape and perhaps become a little less stable. So maybe focus on the tooling that would help identify those problems before they happen. Imagine being able to look into a software solution from the outside in and say: this version that you're about to release to production has more change in it than I think you realize.

Are you aware that this business logic has now changed? Maybe that's a new tool that no one has built yet, so that could become a whole domain people start [00:50:00] focusing on. We have a vision of what test automation looks like today, but if we start considering these evals, that's another extension of it.

And there'll be tooling around that, so maybe you could become more of an expert in that domain. There will be new specialities people can go and focus on. But I think the critical thing there is imagination. That's probably a skill we don't talk about often.

It's odd to call it a skill unless you were hoping to become a children's book writer or something where imagination is critical. But being able to imagine something, that's actually where it all starts. If you can imagine something with these tools today, you can probably build a version of it.

But you need to imagine it. So where do you get that curiosity and that seed of imagination from?

Simon Maple: Yeah. And very often being too deep in the weeds actually pulls us away from that. I really like the analogy you drew earlier with the architect role that people today actually strive for.

It's seen as that [00:51:00] progression from an engineer to an architect. And I personally see that as a potential future for an engineer: they will still be an engineer, but it'll be an architect role as an engineer.

Annie Vella: Yeah.

Simon Maple: Yeah.

Annie Vella: And I was just going to say one more thing. Much like during the industrial revolution, from what is written about it, some of those who were doing the job manually moved into roles where they really understood the tools, the machinery that could now do the job they used to do, so that they could wrangle the machines: piece them together, improve them, fix them when they broke. At their core, LLMs are probably a special type of machinery like that. Not every software engineer is going to become a data scientist or AI specialist, but the LLM is the beating heart, I reckon, that's going to drive a lot of other systems, whether because the LLM is the [00:52:00] orchestrator of an agentic system, or because it underlies a number of agents within your IDE or something.

So understanding how those systems are pieced together, the telemetry, the observability: how do you wrangle a system that is non-deterministic in nature, that has at its core this incredible computational brain making its own decisions?

Simon Maple: Yeah.

Annie Vella: Become an expert at running those systems, which are completely different to the systems we run today.

They're similar in that they're probably still going to use a database and run on a server somewhere, so those things might remain the same, but the orchestration will be quite different. Learning to wrangle those systems, fix them when they break, and identify where the issue is: that will become quite an interesting job, I imagine. Maybe that's the architect of the future as well.

Simon Maple: Yeah. And history has shown us time and time again how these kinds of disruptive changes create jobs [00:53:00] exactly like you just described. I think that's definitely the destination we need to strive for. Annie, thank you so much. I must encourage those who haven't read your blog to absolutely take the time to read through it, because it's beautifully written and very poignant to a lot of the challenges we all face right now.

So definitely read it if you haven't had the time, or get AI to summarize it if you want to be truly AI-embracing. Annie, thank you so much for taking the time. It's been an absolute pleasure to chat with you.

Annie Vella: Thanks Simon. I really enjoyed the chat.

Simon Maple: Excellent. And for those of you listening, please do tune in to the next episode.

Subscribe to our podcasts here

Welcome to the AI Native Dev Podcast, hosted by Guy Podjarny and Simon Maple. If you're a developer or dev leader, join us as we explore and help shape the future of software development in the AI era.


THE WEEKLY DIGEST

Subscribe

Sign up to be notified when we post.

Subscribe


JOIN US ON

Discord

Come and join the discussion.

Join
