News

Can AI keep its commitments? Key Takeaways from Guy Podjarny’s AI Native DevCon Keynote

16 May 2025

Patrick Debois

Can AI keep its commitments? A review of Guy Podjarny's AI Native DevCon Keynote.

From Code Suggestions to Commitments

AI-assisted coding has come a long way, from simple code completion to agents writing complete codebases asynchronously. The human factor is still there, though: we have to review what AI produces, and that manual review puts the brakes on the potential speedup we could achieve. The stochastic nature inherent to the technology makes each generation something of an AI slot machine.

In his keynote at AI Native DevCon, Guy Podjarny (founder/CEO of Tessl) proposes that AI should start making commitments to us, similar to how humans do within organizations and, in the broadest sense, society. As teams or individuals, we commit to delivering certain functionality and quality to our peers and customers.

Can We Trust AI to Deliver?

Now, how can AI give us commitments? Every time we challenge the results of current AI technology, the answer is often "just add this one more thing and it will work better." From better prompts to codebase indexing, documentation chunking, reasoning, and now MCP tools, the quality gains have been impressive, yet manual review remains (unless you are vibecoding, of course).

The question, however, is how we can trust autonomous approval. Some tools already have an auto-commit feature, with heuristics to estimate how certain the model is. Others provide checkpointing for easy rollback to a specific point in the generation. Additionally, tools can learn from our manual assessments and turn them into knowledge for future, more automated reviews.
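A minimal sketch of how such an auto-approval gate might work, assuming a self-reported confidence score and a checkpoint mechanism; every name and threshold here is hypothetical, not taken from any specific tool:

```python
# Hypothetical sketch: auto-apply an AI edit only when a confidence score
# clears a threshold, with a checkpoint for easy rollback. Illustrative only.

from dataclasses import dataclass, field


@dataclass
class Workspace:
    files: dict = field(default_factory=dict)
    checkpoints: list = field(default_factory=list)

    def checkpoint(self) -> None:
        # Snapshot the current state so a bad generation can be undone.
        self.checkpoints.append(dict(self.files))

    def rollback(self) -> None:
        # Restore the most recent snapshot.
        self.files = self.checkpoints.pop()


def apply_edit(ws: Workspace, edit: dict, confidence: float,
               threshold: float = 0.9) -> bool:
    """Auto-apply only when confidence clears the bar; otherwise
    leave the edit queued for human review."""
    if confidence < threshold:
        return False              # falls back to manual review
    ws.checkpoint()               # rollback point, as some tools provide
    ws.files.update(edit)
    return True


ws = Workspace(files={"app.py": "print('v1')"})
applied = apply_edit(ws, {"app.py": "print('v2')"}, confidence=0.95)
# a later failed check could trigger ws.rollback()
```

The design choice here is that low confidence never blocks progress outright; it simply routes the edit back to the human reviewer, while high-confidence edits stay reversible via the checkpoint.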

If you already practice specification-driven development, you have AI generate features not from a single prompt but from more structured requirements. Of course, just as with testing, you need to provide good examples for it to generate good and complete tests. It feels strange to have AI generate its own tests, but we could use a diversity of models, tools, and agents to improve that. Still, if every linter and security scanner passes, and the code passes all the tests it generated, are we in the clear?
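One way to picture specification-driven development is requirements expressed as structured data that yield executable checks, rather than a single free-form prompt. A small sketch under that assumption; the `slugify` function and the requirements are invented for illustration:

```python
# Hypothetical sketch: requirements as structured data, each one an
# executable check. The function and spec entries are illustrative.

def slugify(title: str) -> str:
    # The implementation under test (a trivial stand-in).
    return title.strip().lower().replace(" ", "-")


# Structured requirements: (description, input, expected output).
SPEC = [
    ("lowercases the title", "Hello", "hello"),
    ("replaces spaces with dashes", "AI Native DevCon", "ai-native-devcon"),
    ("trims surrounding whitespace", "  keynote ", "keynote"),
]


def check(spec: list) -> list:
    """Run every requirement; return the descriptions that fail."""
    return [desc for desc, given, want in spec if slugify(given) != want]


failures = check(SPEC)  # an empty list means every commitment is met
```

Because each requirement carries its own expected output, a diverse set of models or agents could independently generate the implementation and be held to the same checks.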

Promise Theory and the Path Ahead

These are all steps in the right direction, says Guy, but he agrees there is no perfect solution yet. This reminds me of Promise Theory, where we think broadly in a system of agents making “promises” to other agents. The word *promise* was chosen deliberately, as we have to assume things can fail. Also, we can’t really make promises on behalf of others, so it’s our job to control as much as we can. 

In an ecosystem with one central control (a superagent), we know it is challenging to keep all promises, and innovation will slow down. Guy refers to this as the big brain versus the open ecosystem. This openness is key to continuing to improve and learn from each other. The final question he asks all of us: Are you ready to commit to it? A clear call to action to all vendors and engineers in the space.

Join the Discord!
