How AI Exposes Your Delivery System

I keep seeing people claim that AI-generated code is shifting the constraint from coding to review and validation. Set aside the fact that coding is almost never the constraint, and that agent-assisted development is simply making that obvious. The claim is, at best, partially true, and only if you were bad at Continuous Delivery to begin with.

If you take CD seriously, validation is not a bottleneck. Validation is something you design from the start. From the value hypothesis onward, you define what needs to be validated, and you automate it as a primary part of development. By the time code exists, validation is cheap, fast, and boring. That’s the goal.
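To make "validation designed from the start" concrete, here is a minimal sketch of the idea: each validation is written down as an executable check at hypothesis time, and the pipeline runs them all on every change. The `ValidationCheck` type, the thresholds, and the stubbed probes are all illustrative assumptions, not a prescription for any particular tool.

```python
# Minimal sketch: validations defined up front, before any code exists,
# then run automatically on every change. All names and numbers here
# are hypothetical examples.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ValidationCheck:
    """One validation, defined when the value hypothesis is written."""
    name: str
    probe: Callable[[], float]        # measures the system (stubbed here)
    passes: Callable[[float], bool]   # the acceptance criterion


def run_checks(checks: List[ValidationCheck]) -> Dict[str, bool]:
    """Run every check; the pipeline fails fast if any check fails."""
    return {c.name: c.passes(c.probe()) for c in checks}


# Checks written at hypothesis time, long before the feature ships.
checks = [
    ValidationCheck("checkout p95 latency under 300 ms",
                    probe=lambda: 240.0, passes=lambda v: v < 300.0),
    ValidationCheck("signup conversion does not regress",
                    probe=lambda: 0.034, passes=lambda v: v >= 0.03),
]

results = run_checks(checks)
assert all(results.values()), f"validation failed: {results}"
```

By the time code exists, running this is cheap, fast, and boring, which is exactly the point.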

If validation suddenly feels expensive, it’s because it was always deferred, manual, or ill-defined — and AI just turned up the volume.

As for manual code review, async code review has always been a worst practice. It’s a laggy, low-signal control masquerading as quality assurance.

Pair programming addressed this constraint by eliminating context switching: the driver focuses on implementation while the navigator focuses on the overall goals. With agents, you are the navigator, not the driver, and the context switching disappears.

However, code review is supposed to exist for things that require judgment:

  • Is this a good test?
  • Is the naming meaningful?
  • Does this reflect good domain boundaries?

Everything else should have been automated years ago. Many teams just never did.

I’ve been experimenting with specialist agents cross-checking each other on exactly those judgment-heavy questions. Early results are better than the average human code review I’ve seen. It’s early, and I still need to run this in anger against real enterprise applications, but for most use cases, traditional code review looks like it’s on borrowed time.
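As a rough illustration of that cross-checking setup, the sketch below routes a diff through one specialist per judgment question and escalates any disagreement to a human. `ask_agent` is a stand-in for a real model call; the structure, question set, and return shape are my assumptions, not a description of the actual experiment.

```python
# Hypothetical sketch of specialist agents cross-checking a change.
# `ask_agent` stubs out the model call; a real version would prompt a
# model specialized per question and parse a structured verdict.

from typing import Dict, List

REVIEW_QUESTIONS = {
    "testing": "Is this a good test? Does it pin behavior, not implementation?",
    "naming": "Is the naming meaningful and consistent with the domain?",
    "boundaries": "Does this change reflect good domain boundaries?",
}


def ask_agent(specialty: str, question: str, diff: str) -> Dict[str, str]:
    """Stand-in for a model call; returns a verdict with a rationale."""
    return {"specialty": specialty, "verdict": "pass",
            "rationale": f"stubbed answer to: {question}"}


def cross_check(diff: str) -> List[Dict[str, str]]:
    """Each specialist reviews the diff; any failure escalates to a human."""
    verdicts = [ask_agent(s, q, diff) for s, q in REVIEW_QUESTIONS.items()]
    if any(v["verdict"] != "pass" for v in verdicts):
        print("escalate to human review")
    return verdicts
```

The design point is that the judgment-heavy questions stay explicit and enumerable, which is what makes cross-checking them cheap.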

The real risk in software delivery has never been “bad code.” It’s the volume of unknowns you push to production in each release. One of the core goals of CD is shrinking that batch size.

If you accelerate coding with AI before widening the pipe of downstream processes, don’t act surprised when the system collapses under the volume of change. You flooded it yourself.

Start improving by finding the problems in production, so you don’t fix constraints in the wrong sequence. Production’s main problems are never “we need to code faster”; they’re about making the information being coded less garbage, and making the processes for detecting problems faster and cheaper.

Never start in the middle.

AI doesn’t change the rules of delivery systems.

It just punishes you faster for ignoring them.