Pattern #1: From Producer to Manager

2 Apr 2025

Patrick Debois

AI produces the code, you review it

It all started with simple code autocompletion in our IDEs. But the real question became: is it suggesting the right code? As developers, we still had to verify everything the AI generated. Fast forward a few years, and we now have chat interfaces suggesting full code snippets. Of course, we still need to review them before pasting anything into our code window.

Chat became multiline suggestions, a single file became multiple files, all the way to the complete scaffolding of applications. The trend is clear: the more AI generates, the bigger the pull requests become. The time saved generating the code has shifted into time spent reviewing it, much like reviewing a colleague's code.


Cognitive load reduction

To do a good code review, you need to be able to step into the context of the running system and understand the changes. If AI continues to be this prolific, we need our tools to help reduce the cognitive load of each review. Here are a few examples of how:

  • A code diff gets enhanced with contextual annotations, helping us focus on the change at hand instead of a plain red/green diff.

  • Breaking down the review into smaller reviewing tasks or files so we don’t get overloaded with seeing all changes at once.

  • Visualizing the impact of a change as a diagram makes it easier to understand.
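To make the second idea concrete, here is a minimal sketch of breaking a large review into smaller per-file tasks: it splits a unified diff on its `diff --git` headers and queues the smallest changes first, so the reviewer builds context gradually instead of facing everything at once. The function names and the smallest-first ordering are illustrative choices, not a description of any particular tool.

```python
def split_diff_by_file(diff_text):
    """Split a unified diff into per-file chunks keyed by file path."""
    chunks = {}
    current_path, current_lines = None, []
    for line in diff_text.splitlines():
        if line.startswith("diff --git"):
            if current_path:
                chunks[current_path] = "\n".join(current_lines)
            # the b/ path is the post-change file name
            current_path = line.split(" b/")[-1]
            current_lines = [line]
        elif current_path:
            current_lines.append(line)
    if current_path:
        chunks[current_path] = "\n".join(current_lines)
    return chunks

def review_queue(diff_text):
    """Order files smallest-first so each review task stays digestible."""
    chunks = split_diff_by_file(diff_text)
    return sorted(chunks, key=lambda path: len(chunks[path].splitlines()))
```

Real review tooling would also group related files and annotate each chunk, but even this simple ordering turns one overwhelming pull request into a sequence of smaller reviewing tasks.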

Our future IDE will change and adapt itself to the review task at hand, a principle captured by the concept of a Moldable Development environment.


Acceptance fatigue

It's tempting to just accept what the AI generates due to the sheer volume of reviews, especially when we don't understand exactly what it did and we are in a hurry to finish all code reviews. This is where acceptance fatigue kicks in: if a system is reliable enough, we tend to assume it will always be good.

Experiments have shown that developers are more likely to accept the code on the weekend, and junior developers tend to accept code they don't understand. This is captured by the Ironies of Automation: even when AI is doing all the work, we first of all need to know what good looks like, stay vigilant, and keep training so we still understand good code. In addition, writing tests is instrumental in helping us capture and pinpoint issues, much like the goal of TDD.
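One lightweight defense against acceptance fatigue is to pin down what "good" looks like before accepting a generated change. As a sketch, suppose the AI produced a hypothetical `slugify` helper (both the helper and the expected cases below are invented for illustration); the reviewer writes characterization tests stating the behavior they expect, and only accepts the code once those pass:

```python
# Hypothetical AI-generated helper under review.
def slugify(title):
    return "-".join(title.lower().split())

# Characterization tests written by the reviewer: they state what
# "good" looks like, independently of what the AI happened to produce.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces ") == "extra-spaces"
    assert slugify("Already-lower") == "already-lower"
```

The tests are small, but they force us to articulate expectations rather than skim a diff and click accept.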


Situational awareness

Current tooling is mostly focused on generating code and tests. The first signs of AI auto-committing and possibly auto-deploying to production are already here. Obviously we'll have tests to act as brakes, but issues can still happen.

This brings up another cognitive load issue: understanding the system when it fails. We need our observability tools and all the situational awareness we can get to decide what to do about an issue. AI can inform us of possible causes and suggest different solutions, but this will be hard if we don't write or review the code first to build that understanding.

Similar to reviewing code before it goes live, we need to do the same with issues and bugs. Review tools will help us understand and streamline the debugging process. And maybe, instead of sending an issue off to the second line for review and solutions, agents will already have tried some solutions and will report back their findings. This workflow is currently not very mature and still feels like a siloed process, but as AI observability and incident response tools become more integrated into the code generation process, this space will continue to mature.
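The "agents try solutions and report back" idea can be sketched as a simple triage loop; everything here is hypothetical (the function names, the fix-as-callable shape, and the passing-first ordering are assumptions), meant only to show the reporting pattern, not any existing tool:

```python
def triage(candidate_fixes, run_tests):
    """Try each candidate fix, run the test suite against it,
    and report findings back for a human to review."""
    findings = []
    for name, apply_fix in candidate_fixes:
        workspace = apply_fix()  # produce a patched workspace
        findings.append({"fix": name, "tests_pass": run_tests(workspace)})
    # surface passing fixes first so the reviewer starts there
    return sorted(findings, key=lambda f: not f["tests_pass"])
```

The human still makes the call; the agent's job is to arrive with evidence (which fixes were tried, which tests passed) instead of an opaque recommendation.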


The job you never knew you had

Making decisions, telling what good looks like, listening to advice from others, and being responsible: these are all traits of being a manager. Knowing when to trust and when to delegate, to either humans or agents, comes back again to knowing what good looks like. Much like a manager, you architect the system and the organisation to make it safe for experimentation, and you set out the rules.

Or as Aurélien Pelletier commented: 

This sounds a lot like the role of a maintainer on an open-source project. All the best practices for that probably apply here too. Except the AI always agrees and won't fork the project ;)

Congratulations, we're all managers now!
