An interactive detective hunt through real AI-generated code. Three rounds. Three cognitive traps. You try to catch the mistakes before they ship.
I asked an AI assistant to write a database query. The code came back clean. Well-structured. It passed my first review. It also had a SQL injection vulnerability that could have exposed an entire user database.
My first reaction wasn't "let me check this carefully." It was "this looks really good."
That's the problem we need to talk about.
AI doesn't fail the way we're trained to catch. It produces confident, well-formatted, professional-looking mistakes. Code that compiles. Output that reads like a senior developer wrote it. And we approve it.
This session is an interactive detective hunt. Real AI-generated code appears on screen. You try to spot what's wrong before the reveal. Three rounds, each harder than the last.
Round 1: SQL injection hiding in clean-looking code. Round 2: hardcoded credentials in plain sight. Round 3 is the one that gets people. Some of the code is actually good. The real skill isn't rejecting AI output wholesale. It's knowing what to trust.
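To make the Round 1 pattern concrete: here is a hypothetical sketch of the kind of code that passes a quick review. The function names and table schema are my own invention, not the session's actual example, but the bug shape is the same: string formatting that reads cleanly and is injectable, next to the parameterized version that isn't.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Looks tidy, compiles, and passes a skim. But a payload like
    # "' OR '1'='1" rewrites the WHERE clause and returns every row.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value instead of
    # splicing it into the SQL text, so the payload is just a string.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```

Both functions return the same rows for honest input; only one of them survives hostile input. That gap between "reads correctly" and "behaves correctly" is what the hunt trains you to see.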
Along the way, we name the traps that make this so hard: automation bias ("the tool generated it, so it's probably fine"), false fluency ("it reads like a pro wrote it"), and error anchoring ("I'll just tweak this and ship it"). I've fallen for all three. Chances are, you have too.
You'll leave with a working mental model for all three traps, hands-on experience spotting real vulnerability patterns, and a practical review habit you can start using Monday.