As AI agents move from writing boilerplate code to performing complex, data-aware tasks, a critical question emerges: how do we prevent a creative but unsupervised AI agent from causing chaos in our databases, where every operation must be precise and predictable? Unchecked, agents pose a dual threat: data corruption through incorrect or malicious Data Manipulation Language (DML) statements, and catastrophic schema changes through unintended Data Definition Language (DDL).
This session provides a practical playbook for putting agents on guardrails. We'll move beyond theory and demonstrate two powerful open-source tools that bring deterministic safety to agent-driven database operations. First, we will show how the MCP Toolbox for Databases can be used as a DML guardrail, preventing agents from executing unsafe queries by providing them with pre-approved, parameterized "tools" for data access. Second, we will explore how Atlas can manage and verify DDL changes, ensuring that any agent-proposed schema modifications adhere to company policy and migration best practices. This is a hands-on guide to building robust, production-ready agents that you can actually trust with your data.
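To make the "pre-approved, parameterized tools" idea concrete, here is a rough sketch of what such a guardrail configuration can look like. The source name, table, and column names below are illustrative assumptions, not material from the session; consult the MCP Toolbox documentation for the exact configuration schema.

```yaml
# tools.yaml -- hypothetical MCP Toolbox for Databases config.
# The agent may only invoke the named tools below; it never sends raw SQL.
sources:
  orders-db:
    kind: postgres
    host: 127.0.0.1
    port: 5432
    database: shop
    user: toolbox_user
    password: ${DB_PASSWORD}

tools:
  lookup-order:
    kind: postgres-sql
    source: orders-db
    description: Fetch a single order by its id.
    parameters:
      - name: order_id
        type: integer
        description: The order's primary key.
    # Parameterized statement: the agent supplies a value, never SQL text,
    # so DML outside this approved query is simply impossible.
    statement: SELECT id, status, total FROM orders WHERE id = $1;
```

On the DDL side, a typical Atlas check can run in CI with something like `atlas migrate lint --dev-url "docker://postgres/15/dev" --latest 1`, which replays the newest migration against a disposable dev database and flags destructive or policy-violating changes before they reach production (flags shown as an assumption; see Atlas's CLI docs).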
Rotem Tamir (40), father of two. Coding since the age of 6 (my first programs were written in QBASIC). Since I started working professionally, I have always been obsessed with making software engineering suck less. Previously CTO @ Ariga and co-creator of atlasgo.io (a Terraform for databases), focused on making working with databases easy, fast, and reliable. An avid technical content creator with more than 500k words published, today I help nerdy technical founders build good businesses.