Podcast

Podcast

CTO of $7B Snyk Talks AI Security, Risky Software & Enterprise Adoption

With

Danny Allan

17 Jun 2025

AI Hallucinations Don't Scare Us
AI Hallucinations Don't Scare Us
AI Hallucinations Don't Scare Us

Episode Description

In this episode of AI Native Dev, Guy Podjarny and Danny Allan unpack how security has reduced just to a concern from a roadblock for devs. On the docket: • Why 80% of Snyk’s enterprise customers are actively using AI tools • Navigating security risks of today and tomorrow • The recurring flaw in every new stack • Why more code means more vulnerabilities AI Native Dev, powered by Tessl and our global dev community, is your go-to podcast for solutions in software development in the age of AI. Tune in as we engage with engineers, founders, and open-source innovators to talk all things AI, security, and development. 

Overview

Background and Introduction

Danny, CTO of Snyk, joins Guy to reflect on decades of experience in security—from early days at Watchfire and IBM to leading engineering at Veeam. Danny started as a pen tester and now oversees AI and security at Snyk, where he’s deeply involved in the evolution of generative AI adoption across software teams.


Enterprise AI Adoption is Surging

Over 80% of Snyk’s enterprise customers are already using AI tools, especially for code generation and chat-based support. Developers are driving adoption rapidly—even using unofficial tools like Cursor, Codium, or Copilot when not formally allowed. Unlike the initial cloud wave, even security teams are embracing AI to build internal automations.


Security Concerns: High Priority, Not a Barrier

Security is the top concern among enterprises adopting AI, but it rarely stops deployment. OWASP’s Top 10 for LLMs (e.g., prompt injection, data leakage) are already becoming outdated as new risks emerge—especially with agent-to-agent communication. Standards bodies can’t keep up with the pace of innovation, and many teams are flying blind.


Core Long-Term Security Risks

Danny highlights three fundamental issues that mirror early cloud adoption:

  1. Over-Permissive Access – There's little control over which agents or users can access what data, especially in systems with multiple agents and models.

  2. Non-Determinism – Since AI outputs can differ with identical inputs, auditability and compliance become nearly impossible without new methods of tracing and testing.

  3. Lack of Logging & Observability – Many teams still don’t track how AI is used, which creates dangerous blind spots.


Fragmentation in Tooling and Models

Most companies aren’t standardizing on one AI tool. Even within Snyk, teams use different coding assistants (OpenAI, Claude, etc.) for different tasks. This heterogeneity creates governance challenges that require security platforms outside the model ecosystem—no single model can fully secure itself.


The Role of Agentic Systems

Enterprises are curious but cautious with autonomous agents. Danny classifies usage into three tiers:

  • Assistants (Copilot, Cursor) – Most popular today.

  • Augmented tools (e.g., Augment) – Gaining traction.

  • Fully autonomous agents (e.g., Devin) – Still mostly experimental.

Due to immaturity in authorization and logging, agent-to-agent systems are risky today and not ready for wide enterprise use.


What Developers Can Do Now

Danny offers two pieces of practical advice:

  • Validate open source packages being pulled into projects.

  • Audit AI-generated code for insecure data flows—AI can assist in writing tests and spotting violations.

These controls help developers move fast safely, reducing the risk of untracked vulnerabilities in generated code.


Debunking Overhyped Risks

  • Hallucinations: Becoming less severe as models pre-test outputs.

  • Data Theft: Real but low-risk in most organizational contexts. Danny argues these are less concerning than core access and misconfiguration issues.


Compliance, Attestation, and Regulation

The industry is beginning to move toward attestation and traceability, especially under pressure from financial services and government compliance mandates. Danny hopes for convergence on standards—rather than fragmented, jurisdiction-specific rules.


The “Age of Risky Software”

We're in the riskiest period of the AI lifecycle—early adoption, minimal understanding, maximum pressure to ship fast. The explosion of code generation expands the developer base and increases the attack surface. But Danny remains optimistic: history shows we eventually build guardrails.

The fundamentals haven’t changed—validate inputs, apply access control, and monitor behavior. The challenge is scaling those principles to a faster, more democratized development paradigm. Without guardrails, AI will repeat the cloud’s early mistakes—just at a larger scale.

Chapters

00:00 Trailer
00:50 Introduction
03:20 Generative AI
05:01 Is security a blocker?
07:15 Highly technical, not good coders
11:01 Multimodal
12:18 Security concerns
19:48 Innovative approaches
20:51 Low risks
23:17 Use AI, embrace AI
26:15 The age of risky software
30:15 Q&A
34:39 Outro

Full Script

[00:00:32] Guy: Danny, uh, thanks a lot for coming in. I know we kind of barged in on a day of recording. I think you've had a whole bunch today, so we'll test you out live here. 

[00:00:45] Danny: That works. I don't usually wear a sport coat, but, uh, you know, just, just for you today, I wore a sport coat.

[00:00:50] Guy: Yeah. You're doing sort of, uh, wearing the, uh, the formal wear is like a, I guess if anybody knows anything about me, it's like definitely sort of in the, in the, uh, uh, Snyk’s original culture. Uh, not so much a suit sort of environment on it, but, uh, I think as we sort of barge in to like protect enterprise, uh, customers on it, it's like a little bit of a showy or serious, uh, lens to it that, uh, that was great.

[00:01:11] Guy: So let me kind of, uh, just introduce you a little bit, you know, people kind of heard too much about me on it, but just to talk a little bit, Danny, about your background. I’ve had the pleasure of knowing Danny, oh wow. I'm sort of, uh, I think a little bit embarrassed and I was like, I think it's not quite 20 years, but it's sort of starting to kind of push that envelope on it, uh, uh, as we were building together some of the original AppSec products back in Sanctum, Watchfire.

[00:01:34] Guy: Uh, and, uh, always, uh, sort of, was always at the cutting edge of sort of understanding where we're building out. Ended up, uh, continuing to lead some, uh, uh, research and security research at IBM after it acquired, uh, Watchfire, which we worked at together. Um, and then through a bunch of sort of startup journeys, uh, uh, before Snyk, was CTO at Veeam Software,

[00:01:55] Guy: talking also about sort of a different type of risk, right? Sort of managing, uh, backup software, uh, in big volume in the world of cloud, which was not always embracing. We'll touch on that in a little bit. Security as a, as a first principle, uh, on it. And then, uh, lastly, you know, we got the, uh, the chance to, uh, draw you in to, uh, to Snyk to, to come and be Snyk’s CTO on it, which is very exciting.

[00:02:17] Guy: So like a long, uh, amazing history that I’ve had the pleasure of sort of, uh, uh, working with you in some parts of it, uh, in the world of, of risk, of security of it. Uh, and today sort of dive in a lot. Um. So, uh, did I get any of that? Uh, sort of, uh, wrong over there, uh, Danny on it. Any key pieces you wanna add?

[00:02:38] Danny: No, that's a hundred percent it. I always like to remind people I started as a pen tester, actually doing security testing, but I don't do very much of that anymore. It, it, uh, it's been a fun journey and, and actually working with you, alongside you over the years has made it even better. So, great to be here.

[00:02:55] Guy: Um, so, uh, I guess we're gonna go from, uh, kind of go through a little bit, but maybe the unpleasant topic of, uh, hey, you can just sort of vibe code your way and sort of not carry some consequences.

[00:03:07] Guy: We're gonna try not to be too much sort of doom and gloom, you know, because I think we're both believers, uh, in it. Um, and, and maybe, maybe just before we cap in, I know you've been sort of delving into, uh, AI today. You're sort of with the security hat, but also developer hat in, uh, in Snyk. Uh, do you wanna say just a couple of words about the type of AI, uh, work, uh, that's been happening that you've been immersed in before we get into the subject matter?

[00:03:34] Danny: I always start the conversation when people ask me about AI, or what is AI? Because I remember doing, uh, you know, cracking hash codes 20 years ago by chaining up a whole bunch of FPGAs. And so machine learning and, and we've evolved into where we are today. So I usually like to start with that. AI for myself, at least when I, when I use the term AI, I tend to think of generative AI, which is more relatively newer in nature.

[00:04:00] Danny:.But as you might expect, we've been playing with all kinds of generative AI here on the Snyk side. Certainly the one that comes up the most is generating code, as I expect most of the audience here is using some kind of generative AI in the code world.

[00:04:14] Guy: Yeah. Yep. Very much so. I, uh, I recently heard, uh.

[00:04:18] Guy: Just like a bit of a snarky, but still kind of true, uh, remark, which is, uh, if it works, we call it ML. If it doesn't work, we call it AI, which was, uh, was kind of a little bit funny, like it's been around for a while. Uh, some stuff about the new frontier. Um, so, uh, I guess, uh, we, we're gonna cover a little bit the security concerns of it, and I guess maybe just to tee that up as, as a question, so,

[00:04:42] Guy: security is, uh, is a real concern. AI is a very kind of volatile world. Uh, and generally the, the, the, the sense I get in conversations is that oftentimes security is a bit of a blocker or a, at least a very kind of source of anxiety, uh, when people are embracing AI. So, uh, I guess my question to you for starters is like, what are you seeing?

[00:05:05] Guy: Are you seeing security sort of, uh, truly blocking organizations and enterprises from embracing, um, uh, AI into the systems? Uh, is it, you know, are they adopting it anyway or is it actually slowing down? What are you seeing?

[00:05:20] Danny: Uh, we're definitely seeing them adopt it. What I would say is unequivocally, uh, people are moving forward with AI.

[00:05:26] Danny: If you ask me kind of in the enterprises, the customers of Snyk, it tends to be, you know, certainly coding assistance. We see a lot of that, but you also see a lot of chatbots. You see a lot of support technologies and, uh, I would say well over 80 percent of our customers are embracing AI within the organization. Now, if you ask me the second question of what is the biggest concern as they're doing that, it is security, but I would say it's not blocking things.

[00:05:51] Danny: It's more of a concern than a blocker in the adoption. In fact, adoption rates are extremely high, especially in the development community.

[00:06:00] Guy: Yeah. And I'm curious, like are you feeling, uh, are the security people you talk to, you know, they wish AI wasn't adopted, or are they also excited to sort of have AI adopted?

[00:06:10] Guy: They're just also afraid of the repercussions?

[00:06:13] Danny: No, no. They're excited by it. The developers I talk to all want to use it. In fact, if the organization doesn't allow it, I would say chances are better than 50 percent for sure that they're using a Windsurf or a, you know, a Codium or a Cursor or a Copilot or something within their environment.

[00:06:28] Danny: So they're definitely excited and, and, uh, energized by it. The one thing I would say is sometimes they, because this comment is now everyone's a developer, sometimes they think, well, really, like, just because you know how to talk to a prompt, does that make you a developer? So there is, you know, there, there's some cynicism there, but they're all adopting it.

[00:06:47] Danny: They're embracing it as well.

[00:06:49] Guy: Yep. And I think, um, I just, one thing I've, uh, I've been excited to sort of see is that actually probably like many people within the security organizations are, uh, are like highly technical but not good coders. Uh, and in that sense, I see many of them actually kind of embracing small tools, uh, the creation of small tools with AI, uh, for their own systems.

[00:07:09] Guy: Not like the way they would identify attackers, but just sort of integration software or automation software of it. Um, which I think is kind of fun. Instead of bringing, if you, if you think harken back to the cloud, if I use that analogy, you know, in cloud, um, uh, I think there was probably more aversion, or my sense was there was more aversion about the security teams.

[00:07:30] Guy: They kind of preferred to not use it. And, and here it feels like they're, they're a little bit of a user, uh, as well. Uh, or an embracer. I don't know if you're sort of seeing that in the world of AppSec, uh, as well.

[00:07:42] Danny: Yeah, I'd say that's definitely the case and, and one of the areas is they're using it for curiosity.

[00:07:46] Danny: I unfortunately began my career, I guess, back in COBOL, and so if you ask me what I'm most familiar with, it’s languages that are older today, or are considered older - C++ and COBOL and, and those, but, and, and Python, 'cause I did a lot of security testing. But for example, I will freely admit I'm not very good at Golang.

[00:08:03] Danny: And so one of the ways I wanted to do something in Go, so, you know, I broke out a coding assistant and I said, “Hey, can you create this?” And so, uh, what I'm seeing in the development community is they're using it to both augment skills in areas that they may not have skills. Um. But they're using it to satiate the curiosity of how do I use this in my everyday life?

[00:08:24] Guy: Yeah. Cool. So I guess, um, um, when, when people are, uh, uh, assessing a security concern, again, like I’m going to probably overdo the cloud analogy over here in this conversation. But, uh, um, you know, when, when the cloud came about and it had a bunch of sort of security, uh, uh, concerns around it, the, the, the solution was sometimes expected to come from the cloud itself, you know, from the platform itself.

[00:08:51] Guy: And over time. I guess we've sort of seen, uh, platforms like Snyk, platforms like for sort of cloud major development or, like Wiz or sort of others for sort of CSPM and such actually be the solutions that layer on top of the cloud. So instead of it being built in, I guess, how many do, do you, do you have a sense of how do people feel about, uh, hey, the models will just become secure or like, you know, these, these sort of code generations, they will kind of fix it on their own versus a need for sort of dedicated security solutions.

[00:09:22] Guy: Sort of above, above the fold.

[00:09:25] Danny: Well, AppSec people definitely believe that it's not going to secure itself, and developers generally are more optimistic and think that it will secure itself. And I, I think that both are true. The analogy to the beginning of, of cloud, GuyPo, is actually super relevant, because if you think about the, when the cloud first started taking off, I mean it was booming 'cause people were saying, Hey, I can get ROA or I can get footprint in an area that I don't have.

[00:09:49] Danny: And if you fast forward after the inception of cloud two or three years, three big things were a result of that. One is. Uh, we're still dealing with the security misconfigurations of storage, right? Open S3 buckets, like no one is doing anything about that. Then there was over-permissive IAM and access controls.

[00:10:09] Danny: And thirdly, there was very poor visibility and logging. And I would say that those same three concepts apply equally to AI. It's very much the same. And, and security people who are aware of this and know this say, this is the era that we're entering into. We're setting ourselves up for failure if we don't consider security going forward.

[00:10:29] Guy:Right? Yeah. And they can anticipate that. And of course, some of the question is, well, what can you do about it? Um, and I think, uh, I guess another, uh, topic we've had in, uh, when, when we think indeed about sort of cloud and about application security is this notion of like transcending fragmentation, right?

[00:10:44] Guy: Of, uh, uh, if you're using one model, maybe you have the aspiration that it would sort of secure itself, you know, all those concerns. But when you're using multiple models, you know, do you, do you kind of expect each of them to just sort of secure itself, or do you need something that's overarching? I guess when you're looking at enterprises today, are you seeing them generally like, you know, pick a partner and kind of go all in with them?

[00:11:06] Guy: Or are you sort of seeing a breadth of tool and, and kind of model adoption?

[00:11:12] Danny: Definitely a breadth of model and tool adoption. Even if I'll, I'll use our organization as an example. We have multiple coding assistants that are being used here, and we're using multimodal things, and we use, you know, OpenAI for one thing.

[00:11:25] Danny: We use Claude 3.7 for another thing. And, and so that is what I would say is typical. And so if you want governance across all the different tools, across all the different models, then you need something that is outside of those environments. And, well, some models are better than others at securing things.

[00:11:43] Danny: I would say there are zero models that can secure everything. And so having some awareness of the types of weaknesses, I think, is a pretty important thing for most organizations to focus on.

[00:11:53] Guy: Yeah, so that's an excellent tee-up, yeah, precisely to my next question, which is, let's sort of get a little bit more sort of technical.

[00:11:57] Guy: So like, what are the security concerns that people raise, uh, the most as you, as you sort of engage with them, and especially when it comes to sort of AI development, right? And, uh, in this world. Do you mind sort of saying a little bit about the OWASP TOP 10, sort of the developer audience doesn't necessarily, uh, not everybody knows what those are.

[00:12:34] Danny: Sure. Yeah. So the open web application security project has always had a top 10 of vulnerabilities.

[00:12:39] Danny: And, uh, in 2023, which two years ago was a lifetime ago, but it was two years ago, they published a top 10 vulnerability list for LLMs. Um, the first one of those was prompt injection, and it went on from there to model poisoning and data theft and data exfiltration. You can Google it and find what those top 10 are.

[00:13:00] Danny: I would say even in the two years since that came to pass, I would say those vulnerabilities tend to be out of date because we're already facing new vulnerabilities. When you have an agent talking to an agent, there is no concept of the vulnerabilities associated with that. Um, so it's evolving, and it's evolving very quickly.

[00:13:19] Danny: So much so that the standards bodies and, and organizations like OWASP are, it's, it's hard to keep up with those vulnerabilities. Some of them though, I would argue, are more real than others. In other words, I think some are overhyped, but, but some are more real, and they'll have longevity too.

[00:13:35] Guy: So let's dig into that.

[00:13:36] Guy: Maybe like, you know, a couple of examples of like, uh, which ones do you think we, we'll start with the, uh, the, uh, the real ones, and then we'll sort of talk a little bit about the ones blown outta proportion. So like, which concerns do you find? Um. Maybe if you named your own sort of top three, uh, that someone, we have an audience here of developers who are sort of building software, like which top three concerns them or their organization should really kind of, uh, focus on.

[00:14:01] Danny: There are two that I personally worry about a lot in the long term. One is around identity and access management. And the reason I say that is 'cause typically LLMs, the large language models, are trained with all kinds of data. And it's not true that everyone inside the organization should have access to all kinds of data.

[00:14:18] Danny: And so you hear this being used as jailbreaking the system or getting data that you're not supposed to have access to. That is exacerbated tenfold when you have agents talking to agents, which we're beginning to see now, where you have some agent using model context protocol to speak to another agent, and there's no concept of passing on authorization.

[00:14:36] Danny: Obviously, authentication has been done, but authorization — the second model. So I worry about that a lot. In the concept of code, let me give you a specific example of this. If, if you have an LLM that has all of your internal code libraries, you may have code libraries for one group that you don't want exposed to another group.

[00:14:52] Danny: Well, how do you ensure that your secondary agent, your coding assistant, knows that I can access this library set, but not this library set? How do you pass that on? And so that's probably one of my top concerns — around data access. Um, right. The second thing is the non-deterministic nature. Uh, as a security person, I'm always kind of very linear in thinking of if I put in this particular, you know, payload, I can exploit the system and get it to do something that I, I don't want it to do.

[00:15:22] Danny: By the very definition, AI and code generation of AI is non-deterministic, and so having an audit trail and being able to meet compliance requirements is actually really difficult in a world of AI.

[00:15:34] Guy: Yeah. Very interesting kind of the, uh, the notion of attacks. I wonder if it's like a new pattern of attackers in which like, uh, you know, if it, uh, if you don't like the weather, wait, uh, you know, wait five minutes.

[00:15:44] Guy: Uh, type element is like, Hey, it feels like if this SQL injection didn't work, just try it again in five minutes on it. Maybe a new version of the AI came along, and maybe that one is vulnerable. Um.

[00:15:53] Danny: Yeah. And it manifests itself in different ways because if you're doing red teaming, like if you're doing SQL/sequential prompt injection, you might do the exact same sequential prompt injection a second time and it works.

[00:16:03] Danny: And the first time it didn't just simply because of the non-deterministic nature. So that could give you an inflated sense of, of confidence and, 

Guy: Right.

Danny: and that's a problem in a deterministic security world.

[00:16:14] Guy: Yeah. Yeah, absolutely. Because you can't really attest to anything having, kinda, uh, some, some longevity, uh, to it.

[00:16:22] Guy: And how, um, I, I guess how are people, uh, uh, dealing? So let's sort of focus on those two. I guess the first one is, I think, more of a constraint, right? I, I very much relate to like identity access and, uh, or, you know, instead of me sort of narrowing it down, let's talk a little bit about those two cases and,

[00:16:39] Guy:maybe what are some sort of mitigating, uh, controls or what can someone do except for don't use, uh uh, AI, right, or don’t use LLMs.

[00:16:50] Danny: Well, I never recommend that. I'm a big believer in embracing it. But on the, the authorization authentication — mostly authorization, less authentication — um, so we're beginning to see it worked into some of the underlying frameworks. So you're beginning to see extensions of MCP, model context protocol, to have authorization in there.

[00:17:08] Danny: And in fact, we're a member of the Consortium for Secure AI. So you're beginning to see these frameworks — the A2A framework's another one. Google announced that at Google Next, a month ago or so, and at the time didn’t have any authorization in it, but already the working groups are coming together and saying, Hey, let’s introduce it.

[00:17:22] Danny: So my hope is — and Microsoft has embraced that — so now you have two giants both embracing the same concepts. So my belief is that in the longer term, um, we’ll start to work in authorization and identity and access management into the systems. Um.

[00:17:39] Guy: It's like I said, uh, you wait for the industry standards to firm up.

[00:17:43] Guy: You participate in them if you are, uh, you know, like if you carry it out and are able to do so. Um, like I said, in the meantime, you sort of avoid topics that require them because they're just not quite yet.

[00:17:56] Danny: Yeah, I would be very hesitant to do a lot around agent-to-agent communication or agentic-to-agentic systems because of that. Give it six months, give it a year, and I think we're okay.

[00:18:07] Danny: That's not to say don't embrace AI right now. I'd just be more, right, cautious when you get into the meta-layer of, of talking across.

[00:18:14] Guy: Right. Yeah. And I think that's okay. Like, risk and reward should always be kind of, uh, uh, assessed on it. And I guess on the, on the second problem of dynamic code and the code kind of constantly changing

[00:18:25] Guy: First of all, like, I think, I think they probably realize, like, it, it's especially true for agentic systems that might sort of change the code a bit more, uh, uh, dramatically, shall we say. So before, let me kind of do a bit of a side tangent. We'll come back to this risk. Like, I, I guess, uh, how much do you see agentic

[00:18:42] Guy:  Uh, kind of full on, you know, Devin styles of agentic systems being used, uh, in enterprise? Like, are they still in like, oh, this is really cool, I want to kind of try it out in the corner, or are you sort of seeing any sort of, um, uh, real adoption of it in enterprises?

[00:18:58] Danny: What I've seen — and this is the segment of the customers maybe that we're dealing with

[00:19:01] Danny: What I've seen is, uh, this is cool, let me try it out in the corner. I, I kind of group them into three buckets, Guy. I have the highly agentic autonomous ones like Devin. You have kind of the medium ones like Augment that, you know, can do a lot of things, but they're not fully autonomous. And then you have more of the assistant, you know, the Cursor, Copilot type style.

[00:19:21] Danny: I'd say very few customers, within our customer base at least, are going fully agentic. Um, I would say most of them are actually in that lower autonomous model with the Copilots and Cursors, and maybe 20 ish percent, 30 ish percent are in kind of the middle that are doing very much more augmented, uh, code assistance.

[00:19:40] Guy: Okay. Interesting. So I guess kind of coming back to the sort of dynamic nature of it or sort of, you know, like how, how do you deal with compliance, I guess, uh, you know, how do you, how do you kind of pass that, uh, I guess what, what — are there any, any kind of innovative, uh, approaches that you're sort of seeing? Like, how are people dealing with that?

[00:19:59] Danny: Well, a lot of logging and visibility. I said in the early days of the cloud that it had these three problems, and the third one was that there wasn't a lot of logging and visibility. And the good news is people are logging and actually getting interesting metrics out of it. I, I met with a large financial institution, a customer of ours, that said 11 percent of our code is AI generated.

[00:20:16] Danny: And I kind of scratched my head and I said, how do you get to 11 percent? But what they were doing was they were logging all the code that's being submitted into their source control management systems, and then they were logging all the code that was being generated, and they were doing some simple mathematics and division on that.

[00:20:31] Danny: So, you know, whether that's accurate or not, for me the positive outcome is that they were logging it, they were actually tracking it. Um, and I think we'll see more of that as, as we go forward.

[00:20:42] Guy: Yeah, yeah. Makes sense. Uh, I guess maybe let's do some, uh, some myth busting, you know, uh, of it. So I guess which, uh, which security concern or two do you feel, uh, is, is

[00:20:52] Guy: You can theorize about it, but it's slightly blown outta proportion.

[00:20:56] Danny: I tend to think the hallucinations are pretty blown out of proportion. I'm not saying it doesn't happen. I just think that over time we're gonna see fewer hallucinations. And the reason why we'll see that is mostly because the models will, will test themselves before it returns output.

[00:21:10] Danny: And so I think, um, hallucinations would be one of them. Identity and access management, I think we will get out of that problem over time. I think data theft is very low — I'm not saying it's not real, it's definitely real — but if you ask me to rate it on a risk category, I think the likelihood of that being exploited is very low.

[00:21:31] Danny: And so, you know, I always measure things on a risk scale as opposed to an actual security scale. And so while some of these things are real, I just think they're lower risk than other areas.

[00:21:41] Guy: Right. So they might be sort of short term risk. And I guess, which, which, um, when you think about, you know, fundamentals, like as you, you in general, when you think about sort of GenAI and these tools, you know, some flaws are just, you know, the baby steps, right?

[00:21:56] Guy: It's just immature. Yeah. It's sort of evolving so rapidly. And some are maybe a bit more fundamental. Um, I guess, which, which security concern or two do you feel kind of have legs here? And they're gonna sort of stay, they'll, they'll hurt us for a while or they'll just be a concern that we really need to, uh, uh, to, to fundamentally sort of consider, 'cause it's gonna, it's gonna be here.

[00:22:18] Danny: I, I would go back to the same three things that I said about the cloud, and that's because we don't learn our lessons. In other words, I think that the configuration of AI is a problem and will continue to be a problem generally around the permissive nature and what data it's trained on and all of that.

[00:22:33] Danny: The second thing being over permissive identity access management to the data, where we're giving customers access to — let's say I'm training an LLM and I put all of my support data in there. Do I really want to do that if I'm giving customers access to it? Because they could break out of their set of data.

[00:22:49] Danny: So I think that's the second one. And then, uh, the third one, I would go back to the visibility and logging. I think we're gonna fall into the same trap that we did before, that we're not doing sufficient visibility and logging, and we end up in the same place we were with cloud after two or three years.

[00:23:05] Guy: Yeah. Of not knowing. Not knowing what is where and, and how is it happening. I guess the, the, the other lens, I, I, I think sometimes about, uh, some, some security concerns are, uh, due to the nature of the new technology and others are, um, uh, they're, they're just, they're just a, um, an amplification of patterns that we've already seen before, right?

[00:23:28] Guy: So like cloud accelerated development or like DevOps accelerated it, but like Agile came before that and, you know, over time, like just sort of things moved at a faster and faster pace. Um, uh, and, you know, similarly maybe like, uh, yeah, ease of creation improves and as ease of creation improves, like, responsible — like the level of scrutiny or maybe the level of proficiency or, uh, professionalism, uh, involved

[00:23:54] Guy: in watching these things — that goes. And that's true also like when I create images. Like I, I create images in a very nonprofessional way, and that's okay. I'm not gonna like build any sort of a business around my kind of AI image creation. I guess, do, do, do you think, does that distinction work for you?

[00:24:10] Guy: And, and if you had to choose like, which, which path you think is carrier you're, it sounds like you're sort of maybe leaning more towards the how we use it versus the tech, but I know if I'm, (the voice is fading out here)

[00:24:20] Danny: Yeah. Well, I'm a big believer in that we should use it. Like I, I am not a slowdown at all. We should use AI, embrace AI.

[00:24:28] Danny: I guess my, I always fundamentally go back to we need to build in security from the very beginning. The problem that we have historically fallen into every time we embrace a new infrastructure, it does not matter whether it is cloud or servers or virtualization or AI, we forget the fundamentals, which are validate your input and code your output.

[00:24:47] Danny: Apply identity and access control management. Like, it, it, it's funny, while, while the threats are, or the attacks are somewhat different because the infrastructure is different, the root causes for them Guy are, it's been the same for 20 years. I mean, we never write App Shield, we were testing the input that was going to an application.

[00:25:03] Danny: Right?

[00:25:05] Guy:  Yeah. Yeah. A lot of the, sort of the same, uh, fundamentals. I do find, for what it's worth, that the, I find myself almost more concerned about the explosion of, uh, of, uh, of usage, uh, than, uh, than maybe even the fundamentals of it, which are much more sort of interesting. But, uh, uh, so we've kind of seen how security hygiene is the, uh, is eventually the things that organizations actually get breached before.

[00:25:32] Danny: Yeah. What is different today than 20 years ago is there's far more data behind these systems, so the risk is greater if compromise occurs. 20 years ago, there wasn't the same amount of data, and now all of the data is behind there, and if the wrong person gets access to it, that can be very detrimental.

[00:25:48] Guy: Excellent. Uh, so we have time flies here. We sort of have five more minutes. I've got a few more questions for you, and then we have some time for a Q and A. So I just wanted to remind everybody that you can post questions in the comments and we'll have a few minutes for Q and A, uh, uh, in a few minutes on it.

[00:26:02] Guy: Um, I guess, uh, uh, we're, we're gonna try and kind of go dystopian utopian over here a little bit. So like we called this talk the Age of Risky Software. Um, let's talk a little bit about like, what, what is, what is the concern here? What is the, um, you know, the path we need to be aware of or, or, is there even one, right?

[00:26:22] Guy: Is sort of, is it set to fall, like what do we mean by that?

[00:26:26] Danny: The age of risky software, in the early inception days in the new technology change, is always the riskiest because the pressure on the business, and I'm sure this is true of everyone in the audience, is use AI, use AI, be more productive, go fast, go fast, go fast. And so the early days are the greatest risk.

[00:26:44 Danny: And we are in the early days of AI, we don't understand it. I was listening to a podcast this weekend and if you said Dave and Buster’s in a voice message, the voice message never got delivered on an iPhone. Still true, by the way, today, if you,

[00:26:58] Guy: Yeah, persona non grata type words that are, uh, uh, uh, mysteriously avoided.

[00:27:04] Guy: Yeah.

[00:27:05] Danny: And we don't understand how the AI is working sometimes. And so we're in this age of risky software because we're in this go fast, go fast, go fast without truly understanding how it's working, how to secure it, doing all the right things. And so we're in the age of risky software.

[00:27:21] Guy: Yeah, I think that makes sense.

[00:27:23] Guy: I do wonder, maybe coming back to sort of the uh, uh, overused sort of cloud analogy we've been doing here, is when about the adoption of the cloud. The initial adoption was similar, like just as you were saying, you know, like pressure to adopt. People used it and it kind of was sort of this like world of possibilities and we kind of went from a place like we, we forget this, but infrastructure used to be a fairly secure thing, you know, it's like people, like the servers and all of that.

[00:27:48] Guy: They were sort of run by a fairly small, very professional, typically very security aware group that was moving way too slow, which is the reason for the drive. And cloud kind of opened it up and suddenly people could do it faster and more people could do it, and oftentimes without any real understanding of

[00:28:04] Guy: what it is that they're performing. And, you know, fast forward not that many years after, and until today, as you point out, like misconfigurations and sort of flaws in infrastructure and cloud infrastructure typically are such a massive, maybe the top cause for breaches.

[00:28:25] Guy: I guess I wonder if we enable everyone to be a creator which I love. You know, like as a, that notion that to be a developer of sorts, of a type of developer, like a creator of software, um, along lines a alongside allowing existing developers to sort of run it like a much, much faster speed, maybe with a bit less, uh, kind of a scrutiny on it.

[00:28:48] Guy: Uh, like our vulnerabilities in code, the, the, the equivalent. Are they gonna be sort of our Achilles heel later? And I don’t know, does that resonate? Is that a conversation? I guess that uh, and, and are we doomed to sort of just accept it if it is, because it is what it is that tech will be adopted.

[00:29:07] Danny: I'm an optimist. Hey, I believe in the long run it will sort itself out.

[00:29:10] Danny: But you know, you've titled this Age of Risky software and I think that's why I. That is the case. Like what we're doing is democratizing coding, uh, clearly. Um, and so we're going fast and we're opening up the aperture similar to what the cloud did to more and more people. And so the result is more code and more code means more vulnerabilities. Right?

[00:29:30] Danny: Now, over time, I think we'll start to implement the controls in the guardrails to, to secure that. Um, and I also don't, that's why you're still gonna have developers just because more people can do development that doesn't eliminate the need for developers. My son's actually in computer science and he said, dad, should I be worried about, you know, there there not being a need for developers in the future?

[00:29:50] Danny: And I say it. That's ridiculous. Of course there's gonna need to be developers. What we're really doing is opening the aperture to more development. And when we do that, what we need are the security guardrails in place to ensure that we don't end up in a similar position the cloud ended up in. Right, right.

[00:30:07] Guy: So, uh, Michael Wolf asks, uh, uh, uh, he made a comment on passing authorization authentication around when agents, uh, uh, uh, uh, talk to agents that sounds parallel to passing around ownership and trading value for intellectual property or micropayments.

[00:30:43] Guy: Do you see security and ownership sort of in a similar lens? I guess this is to me more about like, what, what are the sort of the, uh, the, the assets and the validity of them that are moving around?

[00:30:53] Danny: Uh, I see the problem as similar, but I see them as very distinct problems. The ownership of intellectual property and with AI is a real issue, and I don't wanna minimize it.

[00:31:03] Danny: It's true for music, it's true for art, it's true for code, it's true for language and books and all of this. I can, I can go tell AI to write me a book. Well, we should get paid on that. I, I, I'm not sure the answer to that. I think there is a secondary issue that is similar, but a different problem that has to do with should they get access to that music arts, you know, text code, right?

[00:31:25] Danny: Whatever it happens to be. So parallel but distinct. The, the reason why the micropayments is important is if we want the proliferation of AI to continue, then there needs to be some model of attribution, because otherwise you're gonna lose all the people that are contributing data into the LMS or the diffusion models that are behind AI.

[00:31:44] Guy: Yeah, no, I think, uh, I think well said on it. And it's interesting to also think a little bit about the, uh, data assets that get moved around with, um, uh, protection on top of them. So that's also interesting. In that case, maybe they do blend a little bit more as a, almost like DRM style, like you pass along content with some, uh, security aspects to it.

[00:32:02] Guy: Um, guess another question is, uh, how, uh, how will the role of, uh, attestation play, uh, play out in terms of sort of AI agents and sort of supply chain like when we think about creation with AI and such. Jason asked this, you have a view?

[00:32:18] Danny: Well, here's one where I, yeah, I, I actually think the government, we're gonna see compliance step in and usually I'm not a big fan of, uh, of regulation, of, of new technologies because it's very difficult, especially for startups to deal with it.

[00:32:33] Danny: But I think because of current compliance requirements, if you look at PCI DSS or any of these, you know, regulations or compliance requirements. They require validation of certain things. And so that is going to require attestation of AI native software that is backed by LLMs. And I think this is gonna solve itself.

[00:32:53] Danny: My only worry and concern Guy is that it becomes very fragmented. What you don't want is a GDPR for every country in the world or every state in the United States. And like that's a bad position to be in. So hopefully we consolidate around some very common well thought out, uh. Requirements or regulations around attestation.

[00:33:11] Guy: Yeah. Yeah, that makes sense. But it also makes sense kind of at the meta level, which is AI will be unpredictable, but there's no reason for it not to be traceable. Might not know what will happen if you wanna do that. Uh, I think I've got two more questions. I think we really kind have time for one. Uh, so, uh, one was about the scope of AI security.

[00:33:28] Guy: I'm gonna pass on that. Uh, sorry about that. It's just, I think it too big for. Uh, to capture here, I’ll ask maybe something a bit more concrete, which is, which security checks, uh, can developers currently build into their agents to try and mitigate as many risks as possible of ending on a very practical bit of advice.

[00:33:48] Danny: Yeah.

[00:33:48] Danny: Well, I would say two things. One is if you're using open source components, and actually one of the things that worries me about AI coding assistants, by the way, is that it replaces to some extent open source components and tries to custom create them. Because what's the downstrip patching and updating of all those open source components, that's a problem.

[00:34:04] Danny: Um, but I would say two things. One is to validate the packages that you're pulling in for validation and then code that you're writing, it's very easy. You can use AI actually to test the data flow across that code and to generate a fix for it. And so if you ask me very practically, what I think you should do is validate your open source packages.

[00:34:22] Danny: And if you're writing custom code, validate that the flow of that code isn't breaking the build in other areas.

[00:34:28] Guy: Yeah. Yeah, yeah. I love that. And, uh, and I guess, you know, the more you automate that, the faster you can run. Right? And people kind of forget that putting guardrails, you know, it sounds like a, uh, a cumbersome thing, but actually when you put the guardrails in, when you're sort of running on a bridge with rails, you can run faster 'cause you're not afraid to fall.

[00:34:44] Guy: And, and, uh, so we put those in, um, yeah. Time flies. We're sort of, uh, here we are sort of, uh, 35 minutes sort of, uh, later on. Danny, uh, huge thanks again for coming in and sharing this knowledge and for, uh, I guess investing in, uh, in AI security. 

Danny: Oh, thank you for having me, Guy. It's a great conversation.

Subscribe to our podcasts here

Welcome to the AI Native Dev Podcast, hosted by Guy Podjarny and Simon Maple. If you're a developer or dev leader, join us as we explore and help shape the future of software development in the AI era.

Subscribe to our podcasts here

Welcome to the AI Native Dev Podcast, hosted by Guy Podjarny and Simon Maple. If you're a developer or dev leader, join us as we explore and help shape the future of software development in the AI era.

Subscribe to our podcasts here

Welcome to the AI Native Dev Podcast, hosted by Guy Podjarny and Simon Maple. If you're a developer or dev leader, join us as we explore and help shape the future of software development in the AI era.

THE WEEKLY DIGEST

Subscribe

Sign up to be notified when we post.

Subscribe

THE WEEKLY DIGEST

Subscribe

Sign up to be notified when we post.

Subscribe

THE WEEKLY DIGEST

Subscribe

Sign up to be notified when we post.

Subscribe

JOIN US ON

Discord

Come and join the discussion.

Join

JOIN US ON

Discord

Come and join the discussion.

Join

JOIN US ON

Discord

Come and join the discussion.

Join