Awesome Reviewers turns code-review feedback into reusable, AI-ready prompts

18 Jul 2025 · 5 minute read

Paul Sawers

Freelance tech writer at Tessl, former TechCrunch senior writer covering startups and open source

Topics: AI Tools & Assistants, GitHub & Git, Prompt Engineering, Open Source
Table of Contents
Turning Code Reviews into AI Prompts
From Open Source to AI Training Data
Open Source and Extensible by Design
The Future of AI Code Review Starts Here

Turning Code Reviews into AI Prompts

How do you capture thousands of hours of code review feedback and turn it into reusable, AI‑ready prompts? That’s what the folks at AI code review startup Baz have done with Awesome Reviewers, a public library of prompts distilled from real-world pull request comments across some of the top open source repositories.

Code review feedback is like tribal knowledge — hard-earned lessons that rarely get documented but shape how teams write and maintain quality code. It’s these lessons that Awesome Reviewers captures, converting them into prompts that teach the AI agents increasingly embedded in modern development workflows.

“AI helps us write code faster than ever, but reviewing that code remains a bottleneck,” said Baz co-founder and CEO Guy Eisenkot. “For many engineering teams and open source projects, review is still inconsistent, manual, and repetitive. Standards are enforced every day, often through scattered review comments, institutional knowledge, or buried documentation.”

From Open Source to AI Training Data

Awesome Reviewers draws from more than a thousand open source projects — including Next.js, LangChain, and FastAPI — identifying common review patterns like style corrections, security warnings, and performance improvements. These patterns are abstracted into prompts that can guide AI reviewers to emulate seasoned engineers.

One example comes from the Node.js framework Fastify. A prompt in the library instructs AI systems to check whether configuration options are explicitly declared and clearly documented, with usage examples to prevent runtime errors and improve clarity.
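
To make that concrete, here is a minimal TypeScript sketch of the kind of code such a prompt nudges reviewers toward. The plugin and option names are illustrative, not taken from the Awesome Reviewers library: the point is that every configuration option is explicitly declared, documented, and given a default, so misconfiguration surfaces at startup rather than at request time.

```typescript
// Illustrative only: a Fastify plugin whose configuration options are
// explicitly declared, documented, and defaulted, in the spirit of the
// review guidance described above. Names here are hypothetical.
import Fastify, { FastifyInstance } from "fastify";

/** Options for a hypothetical rate-limit plugin. All fields are optional. */
interface RateLimitOptions {
  /** Maximum requests allowed per window. Defaults to 100. */
  max?: number;
  /** Window length in milliseconds. Defaults to 60_000 (one minute). */
  windowMs?: number;
}

// Declare every option explicitly and fall back to documented defaults,
// so bad values fail fast at startup instead of causing runtime surprises.
async function rateLimitPlugin(fastify: FastifyInstance, opts: RateLimitOptions) {
  const max = opts.max ?? 100;
  const windowMs = opts.windowMs ?? 60_000;
  if (max <= 0 || windowMs <= 0) {
    throw new Error("rateLimitPlugin: 'max' and 'windowMs' must be positive");
  }
  fastify.decorate("rateLimitConfig", { max, windowMs });
}

const app = Fastify();
// Usage: options are passed explicitly and match the documented interface.
app.register(rateLimitPlugin, { max: 50, windowMs: 30_000 });
```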

Developers can hit a Copy Prompt button and paste the prompt into AI code review tools that support custom instructions, such as Cursor, Claude Code, or Codex. Baz users can also deploy the prompt directly into their workspace using Deploy to Baz, automating the integration entirely.

Open Source and Extensible by Design

Awesome Reviewers isn't tied to Baz. Any tool that supports prompt customization can use the library. But Baz’s own platform makes the workflow smoother, offering seamless integration and prompt management as part of its pull request review engine.

It’s also an open source project, released under the Apache 2.0 license and available on GitHub. Developers can browse the full prompt corpus, build their own integrations, or fork the project to suit internal review processes. Teams could automate workflows that pull prompt data directly from GitHub, feed it into an LLM, and apply checks to every pull request — posting inline comments when the AI detects violations of best practices.
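
As a rough sketch of that kind of pipeline (the helper functions, repository URL, and LLM call below are hypothetical placeholders rather than anything shipped by Awesome Reviewers or Baz), a team could wire it together with GitHub's Octokit client along these lines:

```typescript
// Hypothetical glue code: fetchPrompt, reviewWithLLM, and the prompt URL are
// placeholders, not part of Awesome Reviewers or Baz. The GitHub calls use the
// real @octokit/rest client.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Pull the raw prompt text from a repository (URL is illustrative).
async function fetchPrompt(rawUrl: string): Promise<string> {
  const res = await fetch(rawUrl);
  if (!res.ok) throw new Error(`Failed to fetch prompt: ${res.status}`);
  return res.text();
}

// Placeholder for whichever LLM client a team uses; it should return review
// findings as plain text given a prompt and a diff.
async function reviewWithLLM(prompt: string, diff: string): Promise<string> {
  throw new Error("wire up your LLM provider here");
}

async function reviewPullRequest(owner: string, repo: string, pull_number: number) {
  const prompt = await fetchPrompt(
    "https://raw.githubusercontent.com/your-org/review-prompts/main/fastify-config.md"
  );

  // Fetch the pull request as a unified diff via the GitHub API.
  const { data } = await octokit.rest.pulls.get({
    owner,
    repo,
    pull_number,
    mediaType: { format: "diff" },
  });
  const diff = data as unknown as string; // with the diff media type, data is raw text

  const findings = await reviewWithLLM(prompt, diff);

  // Post findings as a single PR comment; a production version would map each
  // finding to a file and line and use pulls.createReviewComment instead.
  await octokit.rest.issues.createComment({
    owner,
    repo,
    issue_number: pull_number,
    body: findings,
  });
}
```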

The Future of AI Code Review Starts Here

The current library includes over 470 prompts spanning 15 languages, with a focus on Python, TypeScript, and Go. Eisenkot admits the prompts aren’t perfect — some may be overly narrow or too vague — but emphasizes their grounding in real human review.

Community feedback has been largely positive. Developers appreciate getting helpful suggestions early in the development cycle, and some have already raised ideas to improve the experience — like tighter integrations with editors and review tools.

Like DeepWiki and Context7, Awesome Reviewers transforms unstructured developer knowledge into structured, machine-readable form. And because it’s open source, anyone can build on it. Baz might reserve the smoothest integrations for its platform, but the core value is available to all: code review wisdom, distilled into prompts, ready to teach the next generation of AI developers.

Resources

  • Baz AI Platform
  • Next.js Documentation
  • FastAPI Documentation
  • LangChain GitHub Repository

Related Articles

  • Document your developer system prompts (21 Mar 2025)
  • Task Framing: No Need to Beg! (2 Feb 2024)
  • Roast-ing AI workflows with Ruby (25 Jun 2025)
