
CTO of $7B Snyk Talks AI Security, Risky Software & Enterprise Adoption
In this episode
In this episode of AI Native Dev, Guy Podjarny and Danny Allan unpack how security has gone from being a roadblock for developers to just one concern among many.
On the docket:
• Why 80% of Snyk’s enterprise customers are actively using AI tools
• Navigating security risks of today and tomorrow
• The recurring flaw in every new stack
• Why more code means more vulnerabilities
Background and Introduction
Danny, CTO of Snyk, joins Guy to reflect on decades of experience in security—from early days at Watchfire and IBM to leading engineering at Veeam. Danny started as a pen tester and now oversees AI and security at Snyk, where he’s deeply involved in the evolution of generative AI adoption across software teams.
Enterprise AI Adoption is Surging
Over 80% of Snyk’s enterprise customers are already using AI tools, especially for code generation and chat-based support. Developers are driving adoption rapidly, even reaching for unofficial tools like Cursor, Codium, or Copilot when they aren’t formally sanctioned. Unlike in the early cloud wave, even security teams are embracing AI to build internal automations.
Security Concerns: High Priority, Not a Barrier
Security is the top concern among enterprises adopting AI, but it rarely stops deployment. The OWASP Top 10 for LLMs (e.g., prompt injection, data leakage) is already becoming outdated as new risks emerge, especially around agent-to-agent communication. Standards bodies can’t keep up with the pace of innovation, and many teams are flying blind.
Core Long-Term Security Risks
Danny highlights three fundamental issues that mirror early cloud adoption:
Over-Permissive Access – There's little control over which agents or users can access what data, especially in systems with multiple agents and models.
Non-Determinism – Since AI outputs can differ with identical inputs, auditability and compliance become nearly impossible without new methods of tracing and testing.
Lack of Logging & Observability – Many teams still don’t track how AI is used, which creates dangerous blind spots (a minimal audit-logging sketch follows this list).
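To make the logging gap concrete, below is a minimal audit-wrapper sketch. The function and field names are illustrative, not from any real SDK; the idea is simply to record who used which tool with what input, plus content hashes, so even non-deterministic outputs leave a traceable record.

```python
import hashlib
import json
import logging
import time
import uuid

log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def audited_completion(call_model, prompt: str, *, user: str, tool: str) -> str:
    """Wrap any prompt -> text model call with an audit record.
    Hashing prompt and response gives a tamper-evident trace even
    when identical inputs produce different outputs."""
    request_id = str(uuid.uuid4())
    started = time.time()
    response = call_model(prompt)  # call_model is any prompt -> text callable
    log.info(json.dumps({
        "request_id": request_id,
        "user": user,                       # who asked
        "tool": tool,                       # which assistant/agent was used
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "latency_s": round(time.time() - started, 3),
    }))
    return response

# Usage (hypothetical): wrap whatever client your team actually calls.
# audited_completion(my_llm_call, "Refactor this function...",
#                    user="dev@example.com", tool="cursor")
```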
Fragmentation in Tooling and Models
Most companies aren’t standardizing on one AI tool. Even within Snyk, teams use different coding assistants (OpenAI, Claude, etc.) for different tasks. This heterogeneity creates governance challenges that require security platforms outside the model ecosystem—no single model can fully secure itself.
The Role of Agentic Systems
Enterprises are curious but cautious with autonomous agents. Danny classifies usage into three tiers:
Assistants (Copilot, Cursor) – Most popular today.
Augmented tools (e.g., Augment) – Gaining traction.
Fully autonomous agents (e.g., Devin) – Still mostly experimental.
Due to immaturity in authorization and logging, agent-to-agent systems are risky today and not ready for wide enterprise use.
What Developers Can Do Now
Danny offers two pieces of practical advice:
Validate the open source packages being pulled into projects (a CI-gate sketch follows this list).
Audit AI-generated code for insecure data flows; AI can assist in writing tests and spotting violations (see the test sketch further below).
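As a sketch of the first point, the script below gates CI on known-vulnerable dependencies. It assumes the Snyk CLI is installed and authenticated in the pipeline; verify the flag and JSON field names against your CLI version, and note that any scanner emitting JSON slots in the same way.

```python
"""CI gate: fail the build when pulled-in packages carry known
vulnerabilities. Assumes the Snyk CLI is installed and authenticated."""
import json
import subprocess
import sys

def scan_dependencies(threshold: str = "high") -> int:
    # `snyk test` scans the manifest in the current directory;
    # --json makes the report machine-readable for CI.
    result = subprocess.run(
        ["snyk", "test", "--json", f"--severity-threshold={threshold}"],
        capture_output=True,
        text=True,
    )
    try:
        report = json.loads(result.stdout)
    except json.JSONDecodeError:
        print(result.stderr, file=sys.stderr)
        return result.returncode or 1

    vulns = report.get("vulnerabilities", [])
    for v in vulns:
        print(f"{v.get('severity', '?').upper():10} {v.get('packageName')}: {v.get('title')}")
    return 1 if vulns else 0

if __name__ == "__main__":
    sys.exit(scan_dependencies())
```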
These controls help developers move fast without sacrificing safety, reducing the risk of untracked vulnerabilities in generated code.
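And as a sketch of the second point, a plain test can encode “no insecure data flows” as a cheap first line of defense. The directory, patterns, and regexes below are illustrative assumptions, a crude stand-in for real taint analysis from tools such as Snyk Code or Semgrep, but they show the shape of a repeatable check that AI can help write and extend.

```python
"""Pytest-style check that flags well-known insecure sinks in
AI-generated modules. Regexes are a crude stand-in for taint
analysis, but they catch the most common violations cheaply."""
import pathlib
import re

# Crude signatures for risky sinks; extend for your codebase.
INSECURE_PATTERNS = {
    r"\beval\(": "eval() on dynamic input",
    r"\bexec\(": "exec() on dynamic input",
    r"subprocess\..*shell=True": "shell=True enables command injection",
    r"verify=False": "TLS certificate verification disabled",
    r"execute\(\s*f[\"']": "SQL query built with string interpolation",
}

GENERATED_DIR = pathlib.Path("src/generated")  # hypothetical location

def test_no_insecure_sinks():
    findings = []
    for path in GENERATED_DIR.rglob("*.py"):
        source = path.read_text()
        for pattern, reason in INSECURE_PATTERNS.items():
            for match in re.finditer(pattern, source):
                line_no = source[: match.start()].count("\n") + 1
                findings.append(f"{path}:{line_no}: {reason}")
    assert not findings, "Insecure data flows found:\n" + "\n".join(findings)
```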
Debunking Overhyped Risks
Hallucinations: Becoming less severe as models increasingly test their own outputs before returning them.
Data Theft: Real but low-risk in most organizational contexts. Danny argues these are less concerning than core access and misconfiguration issues.
Compliance, Attestation, and Regulation
The industry is beginning to move toward attestation and traceability, especially under pressure from financial services and government compliance mandates. Danny hopes for convergence on standards—rather than fragmented, jurisdiction-specific rules.
The “Age of Risky Software”
We're in the riskiest period of the AI lifecycle—early adoption, minimal understanding, maximum pressure to ship fast. The explosion of code generation expands the developer base and increases the attack surface. But Danny remains optimistic: history shows we eventually build guardrails.
The fundamentals haven’t changed—validate inputs, apply access control, and monitor behavior. The challenge is scaling those principles to a faster, more democratized development paradigm. Without guardrails, AI will repeat the cloud’s early mistakes—just at a larger scale.