
Danny Allan
CTO, Snyk

Guy Podjarny
Founder & CEO, Tessl
In this talk, Guy Podjarny and Danny Allan unpack the rapid rise of AI in enterprise software, and the security challenges it brings. They compare AI's trajectory to the early cloud era, warning that enthusiasm shouldn't outpace secure development practices. From IAM vulnerabilities to non-deterministic outputs, Allan outlines key risks and urges adoption of protocols like MCP to mitigate them. Despite concerns, both speakers see AI as an enhancer of human potential, so long as we build with security in mind from day one.
The talk opens with Guy Podjarny discussing the swift integration of AI technologies into enterprise environments, particularly in developer tools like coding assistants and customer support systems such as chatbots. He raises an essential question: Are security concerns hindering this adoption? Danny Allan responds by clarifying that security is "more of a concern than a blocker," with over 80% of Snyk's customers actively embracing AI tools. Allan notes that developers often use these tools even without formal organizational approval, highlighting a widespread enthusiasm for AI in the security community.
Allan illustrates how AI augments human skills, bridging knowledge gaps and fostering curiosity among developers and security professionals. This sentiment echoes throughout the talk, emphasizing AI's role in enhancing rather than replacing human expertise.
Guy Podjarny draws insightful parallels between the current AI adoption phase and the early days of cloud infrastructure. He notes that while cloud platforms were initially expected to resolve security issues themselves, it was ultimately external tooling that provided the layered defenses. Allan concurs, identifying enduring cloud security challenges as equally relevant to AI adoption: misconfiguration, over-permissive identity and access management (IAM), and insufficient logging.
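To make the misconfiguration and over-permissive IAM risks concrete, here is a minimal sketch of the kind of policy check this implies: flagging "Allow" statements that grant wildcard actions or resources. The policy structure is a simplified illustration of IAM-style JSON, not a full parser, and the function name is invented for this example.

```python
def find_over_permissive(policy: dict) -> list[dict]:
    """Return Allow statements that grant wildcard actions or resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        # Action/Resource may be a single string or a list; normalize to lists.
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        # "*" or "service:*" actions, or a bare "*" resource, are over-broad.
        wildcard_action = any(a == "*" or a.endswith(":*") for a in actions)
        wildcard_resource = "*" in resources
        if wildcard_action or wildcard_resource:
            findings.append(stmt)
    return findings

policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::app-bucket/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},  # over-permissive
    ]
}
print(find_over_permissive(policy))  # flags only the wildcard statement
```

The same least-privilege review applies when an AI agent, rather than a human, is the principal holding the credentials.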
“We’re setting ourselves up for failure if we don’t consider security going forward,” Allan cautions, highlighting the importance of integrating security into AI development processes early on.
The speakers recognize that organizations often deploy a diverse array of AI models and assistants rather than standardizing on a single provider. This fragmentation necessitates independent security and governance strategies, as no single AI model can secure every aspect of an organization.
Danny Allan discusses specific security vulnerabilities associated with AI, referencing the OWASP LLM Top 10 vulnerability list. He identifies two primary risks:

- Over-permissive identity and access management, where AI agents and tools are granted far broader access than they need; and
- Non-deterministic outputs, which make AI behavior difficult to test, validate, and reason about.
To mitigate these issues, Allan highlights efforts to extend protocols like the Model Context Protocol (MCP) for better authorization and identity integration. He expresses optimism that these efforts will improve security in the long term.
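As a rough illustration of what integrating authorization into a protocol like MCP could enable, the sketch below gates each tool call on the caller's granted scopes. This is not the actual MCP SDK or specification; the names (`TOOL_SCOPES`, `authorize_tool_call`, the tool and scope strings) are all invented for the example.

```python
# Hypothetical per-tool scope requirements; an agent's credentials would
# carry a set of granted scopes issued by an identity provider.
TOOL_SCOPES = {
    "read_tickets": {"support:read"},    # low-risk, read-only tool
    "issue_refund": {"billing:write"},   # high-risk, requires write scope
}

def authorize_tool_call(tool: str, granted_scopes: set[str]) -> bool:
    """Allow a tool call only if the caller holds every required scope."""
    required = TOOL_SCOPES.get(tool)
    if required is None:
        raise KeyError(f"unknown tool: {tool}")
    return required.issubset(granted_scopes)

print(authorize_tool_call("read_tickets", {"support:read"}))  # True
print(authorize_tool_call("issue_refund", {"support:read"}))  # False
```

The point of the sketch is the default-deny posture: a tool missing from the table raises rather than silently succeeding, mirroring the least-privilege principle Allan ties back to cloud IAM.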
Allan addresses misconceptions about AI, downplaying fears like AI “hallucinations” and emphasizing more pressing concerns such as misconfiguration and over-permissive IAM. He advises developers to “validate the packages you’re pulling in” and to leverage AI for testing and validating data flows in their custom code, underscoring the need for automated yet robust security measures.
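A minimal sketch of the "validate the packages you're pulling in" advice: check AI-suggested dependencies against an approved allowlist before installing, catching hallucinated or unreviewed package names. The allowlist contents and the function name are hypothetical examples, not part of the talk.

```python
# Packages already vetted by the organization (hypothetical list).
APPROVED_PACKAGES = {"requests", "flask", "numpy"}

def triage_dependencies(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split suggested package names into approved and needs-review lists."""
    approved = [p for p in suggested if p.lower() in APPROVED_PACKAGES]
    needs_review = [p for p in suggested if p.lower() not in APPROVED_PACKAGES]
    return approved, needs_review

ok, review = triage_dependencies(["requests", "totally-real-http-lib"])
print(ok)      # ['requests']
print(review)  # ['totally-real-http-lib']
```

In practice the allowlist would be backed by registry metadata and vulnerability data rather than a hard-coded set, but the gate itself, validating before installing, is the habit being recommended.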
Guy Podjarny concludes that while AI democratizes software development by accelerating innovation and lowering entry barriers, it also amplifies existing security challenges. Allan emphasizes that the early stages of new technology are often the riskiest due to business pressures to "go fast" without fully understanding or securing the technology.
Both speakers remain optimistic about the future, asserting that while today’s AI-powered software presents real risks, foundational security practices must evolve to keep pace. “The root causes… have been the same for 20 years,” Allan reflects, advocating for building security into AI from the outset to harness its full potential for innovation and progress.