AI Wrote It. It Looks Perfect. Can You Spot What's Wrong?

25-minute Talk

AI doesn't fail the way we expect. It produces confident, polished, professional-looking mistakes. Your judgment is the only thing standing between "looks great" and a live bug.

Virtual Pass session

Timetable

1:30 p.m. – 2:15 p.m. Thursday 19th

Room

Room E1 - Track 4: Talks

Artificial Intelligence (AI), Test Automation

Audience

Testers, QA engineers, and test leads working with AI tools, or about to start

Key-Learnings

  • Name the three cognitive traps that make AI output hard to question: automation bias, false fluency, and error anchoring.
  • Spot real AI-generated vulnerabilities in live code before the reveal. Practice critical review under realistic conditions.
  • Build a practical habit: know which parts of AI output to trust, and which parts to always verify.

An interactive detective hunt through real AI-generated code. Three rounds. Three cognitive traps. You try to catch the mistakes before they ship.

I asked an AI assistant to write a database query. The code came back clean. Well-structured. It passed my first review. It also had a SQL injection vulnerability that could have exposed an entire user database.
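The pattern behind that near-miss looks something like this. This is a minimal, hypothetical reconstruction in Python with sqlite3, not the actual code from the session; the table and function names are illustrative.

```python
import sqlite3


def find_user(conn, username):
    # Clean, well-structured, passes a quick review. But interpolating
    # user input straight into SQL is an injection hole: the payload
    # "' OR '1'='1" turns the WHERE clause into a tautology and
    # returns every row in the table.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn, username):
    # Parameterized query: the driver treats the value as data, never
    # as SQL, so the same payload matches nothing.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```

The unsafe version is the one that "looks really good": it compiles, it reads cleanly, and the flaw is a single pair of quotes away from invisible.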

My first reaction wasn't "let me check this carefully." It was "this looks really good."

That's the problem we need to talk about.

AI doesn't fail the way we're trained to catch. It produces confident, well-formatted, professional-looking mistakes. Code that compiles. Output that reads like a senior developer wrote it. And we approve it.

This session is an interactive detective hunt. Real AI-generated code appears on screen. You try to spot what's wrong before the reveal. Three rounds, each harder than the last.

Round 1: SQL injection hiding in clean-looking code. Round 2: hardcoded credentials in plain sight. Round 3 is the one that gets people. Some of the code is actually good. The real skill isn't rejecting AI output wholesale. It's knowing what to trust.
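Round 2's trap can be sketched in a few lines. This is a hedged illustration, not code from the talk; the constant and the `SERVICE_API_KEY` environment variable name are made up for the example.

```python
import os

# The trap: assistants often emit an inline secret that reads like
# working config, and it ships to every clone of the repository.
API_KEY = "sk-live-EXAMPLE-DO-NOT-SHIP"  # hardcoded credential in plain sight


def get_api_key():
    # Safer pattern: load secrets from the environment (or a secret
    # manager) and fail loudly when they are missing, instead of
    # silently falling back to a baked-in value.
    key = os.environ.get("SERVICE_API_KEY")
    if not key:
        raise RuntimeError("SERVICE_API_KEY is not set; refusing to use a hardcoded fallback")
    return key
```

The hardcoded constant is exactly the kind of thing a reviewer's eye slides past when the surrounding code is polished.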

Along the way, we name the traps that make this so hard: automation bias ("the tool generated it, so it's probably fine"), false fluency ("it reads like a pro wrote it"), and error anchoring ("I'll just tweak this and ship it"). I've fallen for all three. Chances are, you have too.

You'll leave with a working mental model for all three traps, hands-on experience spotting real vulnerability patterns, and a practical review habit you can use starting Monday.
