AI Is a Forcing Function for Developer Discipline
If you don’t understand what good code looks like - and the continuous delivery workflows required to create and ship it - AI is going to be painful.
Not because AI is bad at writing code, but because it is brutally honest about your engineering habits.
AI doesn’t fix sloppy architecture, vague interfaces, or missing documentation. It amplifies them. And it does so at robot speed, while charging you for every token it wastes trying to understand your mess.
Modularity and Documentation Were Never Optional
Today’s lesson is simple: modularity and documentation save time and money.
The best repositories, whether open source or internal, make it easy for developers to onboard and contribute. They do this through clear naming, small modules with single responsibility, tests that describe behavior, and documentation written for developers. The goal is discoverability and communication, not cleverness.
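In miniature, those properties look something like this. A hypothetical sketch (the module and function names are mine, not from any real repository): a file-top docstring states the module's single responsibility, names describe intent, and a test documents behavior.

```python
"""invoice_totals.py -- computes invoice totals; nothing else.

The file-top docstring tells a reader (human or agent) whether this
module is relevant before they read a single function.
"""
from decimal import Decimal


def apply_discount(subtotal: Decimal, discount_rate: Decimal) -> Decimal:
    """Return the subtotal after applying a fractional discount.

    discount_rate is a fraction (0.10 means 10%), not a percentage.
    """
    return subtotal * (Decimal("1") - discount_rate)


def test_apply_discount_takes_a_fraction_not_a_percentage():
    # The test name and body double as documentation of the contract.
    assert apply_discount(Decimal("100"), Decimal("0.10")) == Decimal("90")
```

Nothing here is clever. The point is that a reader, or an agent, can decide whether this file matters to their task from the first three lines.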
That matters because software development has never been a typing problem. It’s an information supply chain problem. The faster someone can understand a system, the faster they can change it safely.
Enterprise Teams Normalized Waste
As the pressure to deliver more, faster, keeps increasing, most teams treat good internal documentation and clean modularity as luxuries.
The cost shows up as developers burning time digging through tech debt: unclear names, oversized components, vague interfaces, and missing explanations of how critical pieces fit together. None of this is tracked. It does not appear on a balance sheet. It quietly drains time, morale, and delivery speed.
In legacy development, that waste was bad enough. With AI agents in the loop, it becomes obvious and expensive.
AI Makes Hidden Friction Visible
When you use agents to assist with development, ambiguity expands the context window. More files are pulled in. More rules have to be interpreted. Intent becomes harder to infer.
That reduces agent accuracy and burns tokens faster. The issue is not that AI is hallucinating. The issue is that you handed it a junk drawer and asked it to be precise.
A Concrete Example: Token Economics
I recently built a proof-of-concept for automated code review. The system spawned specialized agents to review generated code for different concerns and suggest fixes.
It worked - and it burned through my Claude Pro tokens in about an hour.
That forced a question most teams aren’t asking yet:
How do I get the most useful work done with the fewest tokens?
The answer wasn’t better prompts. It was better structure.
Optimizing the Repository, Not the Prompt
Claude helped me refactor my configuration and repository layout so agents had less irrelevant information to read for each task, while remaining precise where it mattered. I asked it for the approach that would work best for Claude, reviewed the suggestion, and had it make the changes.
The code was already modular, but Claude went further: it documented function purpose and interfaces for critical paths and added a short explanation at the top of each file describing its role. These are practices I have known for years and mostly skipped because they were time-consuming.
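For a critical-path interface, that documentation pattern looks roughly like this. A hypothetical sketch (the names and queue design are illustrative, not from the actual repository): a file-top role statement, plus docstrings that spell out the contract, units, and failure behavior.

```python
"""review_queue.py -- role: orders generated-code review tasks for agents.

One-paragraph file header: what this file is for, so an agent can decide
in one read whether to pull it into context.
"""
from dataclasses import dataclass, field


@dataclass(order=True)
class ReviewTask:
    """A single review task on the critical path.

    priority: lower runs first (0 means blocking correctness issues).
    file_path: repository-relative path of the file under review.
    """
    priority: int
    file_path: str = field(compare=False)  # excluded from ordering


def next_task(tasks: list[ReviewTask]) -> ReviewTask:
    """Return the highest-priority task without mutating the queue.

    Raises ValueError on an empty queue, so callers on the critical
    path fail loudly instead of silently reviewing nothing.
    """
    if not tasks:
        raise ValueError("no review tasks queued")
    return min(tasks)
```

The docstrings cost a few minutes to write, and they are exactly the text an agent reads instead of re-deriving the contract from call sites.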
It also refactored the Claude configuration so that each change starts from a leaner, more relevant context.
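In Claude Code, project-level instructions live in a CLAUDE.md file. A hypothetical sketch of what a context-efficient version can look like (the contents and paths are illustrative, not my actual configuration):

```markdown
## Repository map
- src/review/  -- agent-based code review; start here for review changes
- src/config/  -- agent and model settings; nothing else touches these

## Rules for every change
- Read only the module you are changing plus its file-top docstring.
- One task per change; commit atomically and stop.
```

The idea is a cheap index: a few lines that tell the agent where to look, so it does not have to pull half the repository into context to find out.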
Claude did the refactor in about 15 minutes. That investment will save time and tokens on every future change while also making the codebase easier for humans to understand.
This now goes on my “must improve before changing features” list, which I apply to any repo I’m touching for the first time.
AI Is a Forcing Function
This is one of the reasons I’m optimistic about AI-assisted development.
AI makes it painful not to do things correctly.
It is a forcing function for developer discipline - much like continuous integration was a forcing function for:
- Small batch sizes
- Frequent integration
- Writing testable code
You used to be able to succeed as a “senior” developer with sloppy processes. Large commits. Vague interfaces. Minimal documentation.
That era is ending.
Small Tasks, Small Commits, Fresh Context
Just like CI, agent-assisted development requires small, focused tasks and atomic commits, followed by a refreshed context window.
Big AI-driven changes fail for the same reason big human-driven changes fail:
- Too much implicit knowledge
- Too many assumptions
- Too much surface area to reason about
If people are complaining about massive hallucinations and unwanted changes, they’re telling you something important about their architecture and workflow - whether they realize it or not.
Determinism Is the Wrong Complaint
I’m sure the usual “AI only creates garbage” crowd will complain that AI-generated code isn’t deterministic or is just slop. I hope they open a ton of issues in the linked repository, since I’ve literally never typed a line of code in it. Either it will show me how tightly they clutch their personal style, or it will let me harden my automated code review process even further and keep widening how much faster I can deliver value than they can.
That’s engineering.
The Bottom Line
AI doesn’t lower the bar for professional software development. It raises it.
It rewards teams who already understand:
- Modularity
- Clear interfaces
- Testable behavior
- Continuous integration and delivery
And it ruthlessly exposes teams who don’t.
That’s not a bug. That’s progress.
If you’re curious about the patterns Claude used to refactor its configs, check this commit.