
The Rule Maker Pattern: Creating deterministic execution from probabilistic AI generation

AI generation is both amazingly powerful and frustratingly unreliable. Even a simple, carefully written agent instruction can produce different results on repeated runs, which makes it hard to trust AI for unattended automation that needs consistency and predictable flows.
To solve this, a growing number of tools are embracing what I call “The Rule Maker Pattern”: instead of letting AI directly execute changes, they create rules, recipes, and detection patterns that run deterministically. This separation — probabilistic generation vs. deterministic execution — is what makes AI dependable at scale.
The Pattern Emerges
The AI Native Dev podcast keeps returning to spec-driven development, and the Rule Maker Pattern comes into focus under that lens: specs capture intent, and rules operationalize that intent in an executable, testable form. The framing recurs across recent episodes.
When Moderne discusses its approach to code modernization, it doesn’t let an LLM touch the codebase directly. Instead, it uses AI to create OpenRewrite recipes — deterministic transformation rules that can be reviewed, tested, and applied consistently across millions of lines. Matt Biilmann from Netlify describes a similar approach with CodeMod: AI generates recipes, while code changes follow predictable, debuggable paths.
The security world shows similar evolution. Detections.ai uses AI to generate detection rules teams deploy deterministically across infrastructure. These aren't probabilistic "this might be an attack" warnings - they're concrete, testable detection patterns that either match or don't. For years, Snyk has applied this approach to vulnerability detection in code, using AI to create rules that identify security issues with precision and consistency.
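To make the "match or don't" point concrete, here is a minimal Python sketch of what a deterministic detection rule could look like. The rule shape, the `AUTH-0042` identifier, and the log format are invented for illustration; they are not the actual rule formats used by Detections.ai or Snyk.

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class DetectionRule:
    """A concrete, testable detection pattern: it either matches an event or it doesn't."""
    rule_id: str
    description: str
    pattern: str  # regular expression over a normalized log line

    def matches(self, event: str) -> bool:
        return re.search(self.pattern, event) is not None

# The kind of rule an LLM might draft and an analyst would review before deployment.
failed_ssh_logins = DetectionRule(
    rule_id="AUTH-0042",
    description="Failed SSH password logins from an untrusted subnet",
    pattern=r"sshd\[\d+\]: Failed password .* from 10\.0\.0\.\d+",
)

events = [
    "sshd[812]: Failed password for root from 10.0.0.7 port 4422",
    "sshd[813]: Accepted publickey for deploy from 192.168.1.9",
]
print([failed_ssh_logins.matches(e) for e in events])  # [True, False]
```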
This pattern extends beyond code and security. In data work, AI creates SQL queries and transformation rules that execute deterministically. For infrastructure, AI generates Terraform configurations and Kubernetes manifests: deterministic recipes for infrastructure state. In each case, probabilistic generation drives deterministic automation.
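A sketch of the data variant, with the table, column, and fix all invented: imagine the SQL below was drafted by a model, reviewed by a human, and then run unchanged on every batch.

```python
import sqlite3

# Hypothetical reviewed output of the probabilistic step: a one-line cleanup rule.
GENERATED_SQL = "UPDATE orders SET currency = UPPER(currency) WHERE currency <> UPPER(currency)"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, currency TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, "eur"), (2, "USD")])
conn.execute(GENERATED_SQL)  # deterministic: same rows in, same rows out, no model call
print(conn.execute("SELECT id, currency FROM orders").fetchall())
# [(1, 'EUR'), (2, 'USD')]
```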
Code Generation vs Rule Generation
To place the Rule Maker Pattern in context, it helps to compare it with other approaches. Ordinary code generation already shows the principle: the generation is probabilistic, but the code it produces executes deterministically. Agent-based systems sit at the other extreme, with probabilistic decisions at runtime — each run a roll of the dice.
The Rule Maker Pattern sits somewhere in the middle: like code generation, it produces deterministic artifacts, but those artifacts are more constrained, domain-specific, and immediately verifiable.
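A minimal sketch of that middle ground, with the LLM call stubbed out and every name invented: the probabilistic step produces a small, reviewable rule object, and the execution step is plain deterministic code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldRename:
    """A narrow, domain-specific artifact: easy to review, diff, and version."""
    old_name: str
    new_name: str

def propose_rule(instruction: str) -> FieldRename:
    """Probabilistic step (stubbed): an LLM turns intent into a rule artifact.
    This is the only non-deterministic call, and it happens once, up front."""
    # response = call_llm(instruction)  # hypothetical LLM call, reviewed before use
    return FieldRename(old_name="user_id", new_name="customer_id")

def apply_rule(rule: FieldRename, record: dict) -> dict:
    """Deterministic step: the same rule on the same record always yields the same result."""
    return {(rule.new_name if key == rule.old_name else key): value
            for key, value in record.items()}

rule = propose_rule("rename user_id to customer_id in our event payloads")  # generated once
print(apply_rule(rule, {"user_id": 42, "plan": "pro"}))
# {'customer_id': 42, 'plan': 'pro'}
```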
Why This Pattern Works
The appeal of this approach becomes clear when you compare it with the alternative. Pure AI execution - letting an LLM make changes directly - creates problems that most production environments can't tolerate:
Unpredictability at Scale: When you run the same LLM prompt twice, you might get different results. That's fine for creative writing, but catastrophic for updating a production database or modifying critical infrastructure. By using AI to generate rules that execute deterministically, you get the same result every time.
Auditability and Compliance: In regulated industries, you need to know exactly what changed and why. A deterministic rule provides a clear audit trail - this specific pattern was applied to produce this specific change. Try explaining to an auditor why the same LLM produced two different financial results.
This pattern also reframes the "human in the loop" problem. Instead of reviewing every AI output, people review the rules that drive them. A security analyst reviews detection rules, not every alert; a developer reviews recipes, not every change. This scales oversight and aligns with how experts naturally think about patterns and transformations.
Testability: You can test a rule and verify that it handles sample data and edge cases correctly, but you can't meaningfully test something that behaves differently each time it runs. The rule-based approach lets teams build confidence through traditional testing workflows, as in the short sketch below.
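Because the artifact is deterministic, ordinary unit tests give real guarantees. A pytest-style sketch, using an invented flag-rewriting rule:

```python
import re

def apply_flag_rule(command: str) -> str:
    """Hypothetical rule: rewrite the deprecated --insecure flag to --tls-verify=false."""
    return re.sub(r"--insecure(?=\s|$)", "--tls-verify=false", command)

def test_rewrites_deprecated_flag():
    assert apply_flag_rule("curl --insecure https://example.com") == \
        "curl --tls-verify=false https://example.com"

def test_leaves_other_commands_untouched():
    assert apply_flag_rule("curl https://example.com") == "curl https://example.com"

def test_does_not_touch_similar_looking_flags():
    # The lookahead keeps --insecure-skip-tls-verify outside the rule's scope.
    assert apply_flag_rule("kubectl --insecure-skip-tls-verify get pods") == \
        "kubectl --insecure-skip-tls-verify get pods"
```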
Performance and Cost: Executing a rule is far faster and cheaper than calling an LLM. Once you've generated a recipe for a common transformation, you can apply it millions of times without the latency or cost of repeated calls. This is especially crucial for real-time systems where milliseconds matter.
This also highlights why prompt libraries don’t solve the problem. A saved prompt like `Rewrite our logging to the new API` can yield different diffs as models or context change. A rule, by contrast, is executable: replace every `oldLogger(msg)` with `newLogger(msg, timeout=5s)` and update the imports.
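As a sketch only: a production recipe (an OpenRewrite recipe or a codemod) would operate on the syntax tree rather than raw text, and the module names below are invented, but even this regex version makes the point: it does exactly the same thing on every run.

```python
import re

def upgrade_logging(source: str) -> str:
    """Executable form of the rule: oldLogger(msg) -> newLogger(msg, timeout=5s), plus imports."""
    source = re.sub(r"\boldLogger\((.*?)\)", r"newLogger(\1, timeout=5s)", source)
    # 'legacy.log' and 'modern.log' are hypothetical module names, used only for illustration.
    return source.replace("from legacy.log import oldLogger",
                          "from modern.log import newLogger")

before = 'from legacy.log import oldLogger\noldLogger("cache miss")\n'
print(upgrade_logging(before))
# from modern.log import newLogger
# newLogger("cache miss", timeout=5s)
```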
When to Use This Pattern
The Rule Maker Pattern shines in specific scenarios:
Repeatable Operations: When you need the same transformation, detection, or analysis across contexts. A code modernization that runs across thousands of microservices. A security rule that monitors millions of events. A data transformation that processes daily batches.
Compliance Requirements: When you need clear audit trails and predictable behavior. Financial calculations, healthcare data processing, or any regulated environment that must explain exactly what happened and why.
Performance-Critical Paths: When the overhead of calling an LLM is too high. Real-time fraud detection, high-frequency trading systems, or scenarios where milliseconds matter.
Team Collaboration: When multiple people need to understand, modify, and maintain the logic. Rules and recipes are version controlled, code reviewed, and incrementally improved by a team.
This pattern is less suitable for:
One-Time Creations: If you're generating unique marketing copy, a custom illustration, or one-off analysis, the overhead of creating a rule doesn't make sense. Just use the AI directly.
Highly Variable Contexts: When every situation is genuinely unique and patterns don't repeat, creating rules adds complexity without benefit.
Exploratory Work: During research or prototyping phases where you're still discovering what patterns exist, direct AI interaction provides more flexibility.
Where This Leads
The Rule Maker Pattern isn't necessarily the future of AI - it's a pragmatic approach that makes AI more valuable today. It bridges the gap between what AI can do (understand patterns, generate solutions) and what production systems need (predictability, auditability, performance).
As AI models become more capable and reliable, we might rely less on this pattern for some use cases. But even with perfect AI, there will still be value in generating reusable, deterministic artifacts rather than making every decision probabilistically at runtime. The pattern respects both the power of AI and the realities of production systems.
The tools that understand this balance - using AI to create rules rather than to execute changes directly - are finding success not because they're futuristic, but because they're practical. They're working in production at scale, today, solving real problems for teams. And that's a pattern worth understanding, regardless of what the future holds.