Why AI-amplified productivity drains your judgment, not just your energy, and how to design review workflows that protect decision quality.
Steve Yegge's "AI Vampire" essay hit a nerve: AI makes you productive, your employer captures the surplus, and you get drained. He's right about energy. But there's a second bite nobody talks about - the one that eats your judgment.
I orchestrate agentic AI fleets across multiple projects. Amplifying output amplifies the decision load. Every agent-generated pull request needs a human to decide - is this correct? Does it break something downstream?
At a normal pace, you make maybe 50 meaningful quality decisions a day. At an AI-amplified pace, that number multiplies fast. And human decision-making doesn't degrade linearly - the drop is sudden, not gradual. Past a point, you don't just decide more slowly; you start making wrong decisions.
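To make that nonlinearity concrete, here's a toy model - every number in it is an illustrative assumption, not data from the talk - of how the accuracy of each marginal decision might hold roughly flat and then fall off a cliff past a fatigue threshold:

```python
# Toy model, not empirical data: accuracy stays roughly flat until a
# fatigue threshold, then drops sharply. All constants are assumptions
# chosen for illustration.

def decision_accuracy(decisions_so_far: int, threshold: int = 50) -> float:
    """Assumed accuracy of the *next* decision after `decisions_so_far`."""
    if decisions_so_far <= threshold:
        return 0.95  # well-rested reviewers are fairly reliable
    # Past the threshold, each additional decision costs disproportionately.
    overload = decisions_so_far - threshold
    return max(0.50, 0.95 - 0.05 * overload)

for n in (50, 60, 150, 300):  # normal pace vs. AI-amplified paces
    print(f"{n:>3} decisions/day -> marginal accuracy: {decision_accuracy(n):.0%}")
```

The exact shape doesn't matter; the point is that a 3x jump in decision load doesn't cost you 3x in quality - it pushes you past the cliff.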
I coined "completion theater" for when AI agents optimize for appearing done rather than being done. Then I noticed: we humans do the exact same thing under pressure. That third agent PR in an hour? The diff looks reasonable, tests pass, hit approve. The ritual of review without the substance.
In this talk, I'll cover the quality ratio (decisions made vs. decisions validated) and why it compounds the productivity problem; real examples of human completion theater from my own work; and my daily workflow - batched reviews, physical breaks as judgment infrastructure, and decoupling human rhythm from machine speed.
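For a sense of how you might measure that ratio yourself, here's a minimal sketch - the `ReviewLog` class and its bookkeeping are my invention for the example, not tooling from the talk:

```python
# Minimal sketch: track how many review decisions you actually validated
# versus how many you rubber-stamped. The record-keeping here is assumed,
# not from the talk.

from dataclasses import dataclass

@dataclass
class ReviewLog:
    decisions_made: int = 0       # every approve/reject you clicked
    decisions_validated: int = 0  # the subset you genuinely scrutinized

    def record(self, validated: bool) -> None:
        self.decisions_made += 1
        if validated:
            self.decisions_validated += 1

    @property
    def quality_ratio(self) -> float:
        return self.decisions_validated / max(1, self.decisions_made)

log = ReviewLog()
for i in range(12):                # one sitting of agent PRs
    log.record(validated=(i < 5))  # attention runs out after the first few
print(f"quality ratio: {log.quality_ratio:.0%}")  # -> 42%: completion-theater territory
```

Batching reviews is one way to keep that ratio honest: validate a few PRs with full attention, then stop, rather than approving everything at half attention.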
This isn't wellness advice. It's quality economics: your judgment is the most expensive component in your AI workflow, and burning it out is the costliest quality failure you can incur.