Webinar

The Age of Risky Software

With

Danny Allan

13 May 2025

In this talk, Guy Podjarny and Danny Allan unpack the rapid rise of AI in enterprise software, and the security challenges it brings. They compare AI's trajectory to the early cloud era, warning that enthusiasm shouldn't outpace secure development practices. From IAM vulnerabilities to non-deterministic outputs, Allan outlines key risks and urges adoption of protocols like MCP to mitigate them. Despite concerns, both speakers see AI as an enhancer of human potential, so long as we build with security in mind from day one.

Embracing AI: Opportunities and Security Concerns

The talk opens with Guy Podjarny discussing the swift integration of AI technologies into enterprise environments, particularly in developer tools like coding assistants and customer support systems such as chatbots. He raises an essential question: Are security concerns hindering this adoption? Danny Allan responds by clarifying that security is "more of a concern than a blocker," with over 80% of Snyk's customers actively embracing AI tools. Allan notes that developers often use these tools even without formal organizational approval, highlighting a widespread enthusiasm for AI in the security community.

Allan illustrates how AI augments human skills, bridging knowledge gaps and fostering curiosity among developers and security professionals. This sentiment echoes throughout the talk, emphasizing AI's role in enhancing rather than replacing human expertise.

Drawing Parallels: AI and Cloud Adoption

Guy Podjarny draws insightful parallels between the current AI adoption phase and the early days of cloud infrastructure. He suggests that while cloud providers were initially expected to resolve security issues inherent to the platform, it was ultimately external tooling that delivered layered security. Allan concurs, noting that enduring cloud security challenges, such as misconfiguration, over-permissive identity and access management (IAM), and inadequate logging, are equally relevant to AI adoption.

“We’re setting ourselves up for failure if we don’t consider security going forward,” Allan cautions, highlighting the importance of integrating security into AI development processes early on.

The Fragmented Landscape of AI Models

The speakers observe that organizations typically deploy a diverse array of AI models and assistants rather than standardizing on a single provider. This fragmentation calls for security and governance strategies that are independent of any one vendor, since no single AI model can secure every aspect of an organization's AI usage.

Addressing Technical Risks in AI-Powered Software

Danny Allan discusses specific security vulnerabilities associated with AI, referencing the OWASP LLM Top 10 vulnerability list. He identifies two primary risks:

  1. Identity and Access Management (IAM): Large language models (LLMs) trained on vast datasets pose risks of unauthorized access to sensitive information. Allan notes the complexity introduced when AI agents communicate without robust authorization protocols.

  2. Non-determinism and Auditability: The unpredictable nature of AI-generated code poses compliance and auditing challenges. Allan explains, “Having an audit trail and being able to meet compliance requirements is actually really difficult in the world of AI.”
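The auditability concern can be made concrete with a small sketch: even when a model's output is non-deterministic, hashing each prompt/output pair into an append-only log pins down exactly what was generated, by which model, and when. The function and field names below are hypothetical illustrations, not taken from any specific tool discussed in the talk:

```python
import hashlib
from datetime import datetime, timezone

def record_generation(prompt: str, output: str, model: str, log: list) -> dict:
    """Append an audit record for one AI code-generation event.

    Hashing the prompt and output gives a tamper-evident trail even
    though the model is non-deterministic: the inputs and outputs of
    this run are pinned, even if a re-run would produce different code.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    log.append(entry)
    return entry

audit_log: list = []
record_generation("write a sort function", "def sort(xs): ...", "example-model", audit_log)
```

An entry records digests rather than raw text, so the log can be retained for compliance without storing potentially sensitive prompts verbatim.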

To mitigate these issues, Allan highlights efforts to extend protocols like the Model Context Protocol (MCP) for better authorization and identity integration. He expresses optimism that these efforts will improve security in the long term.
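To illustrate the authorization gap such protocol work aims to close, here is a deny-by-default permission gate for agent tool calls. This is an illustrative sketch only; the class and table names are hypothetical and are not part of the actual MCP specification:

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    agent_id: str
    tool: str

class AuthorizationError(Exception):
    pass

# Explicit per-agent allowlists: any tool not listed is denied.
# (Hypothetical agent and tool names, for illustration only.)
PERMISSIONS = {
    "support-bot": {"search_docs", "create_ticket"},
    "code-assistant": {"read_repo"},
}

def authorize(call: ToolCall) -> None:
    """Raise unless the agent is explicitly permitted to use the tool."""
    allowed = PERMISSIONS.get(call.agent_id, set())
    if call.tool not in allowed:
        raise AuthorizationError(f"{call.agent_id} may not call {call.tool}")

authorize(ToolCall("support-bot", "search_docs"))  # permitted, no exception
```

The design choice worth noting is deny-by-default: an unknown agent gets an empty permission set, which directly addresses the over-permissive IAM pattern Allan flags as carrying over from the cloud era.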

Debunking Myths and Offering Practical Guidance

Allan addresses misconceptions about AI, downplaying fears like AI “hallucinations” and emphasizing more pressing concerns such as misconfiguration and over-permissive IAM. He advises developers to “validate the packages you’re pulling in” and to leverage AI for testing and validating data flows in their custom code, underscoring the need for automated yet robust security measures.

Conclusion: Securing the Future of AI-Driven Development

Guy Podjarny concludes that while AI democratizes software development by accelerating innovation and lowering entry barriers, it also amplifies existing security challenges. Allan emphasizes that the early stages of new technology are often the riskiest due to business pressures to "go fast" without fully understanding or securing the technology.

Both speakers remain optimistic about the future, asserting that while today’s AI-powered software presents real risks, foundational security practices must evolve to keep pace. “The root causes… have been the same for 20 years,” Allan reflects, advocating for building security into AI from the outset to harness its full potential for innovation and progress.


About The Speaker

Danny Allan

Chief Technology Officer, Snyk

As Chief Technology Officer, Danny is responsible for the global technology roadmap and security research at Snyk. With more than 25 years of software technology experience, he is passionate about solving customer problems and software innovation. Previously, Danny was CTO of Veeam Software where he launched more than a dozen new product offerings and spearheaded the company into the leading market share position. Earlier in his career, Danny was Director of Security Research at IBM and a member of the Security Architecture Board where he co-authored the IBM Secure Engineering Framework. He holds multiple software patents in the cloud and security field.


Welcome to the AI Native Dev Podcast, hosted by Guy Podjarny and Simon Maple. If you're a developer or dev leader, join us as we explore and help shape the future of software development in the AI era.
