Why human-centric testing matters more in the age of AI
For over a decade, the dominant narrative in software quality has been clear: automate everything, reduce manual testing, and scale through code. That model may have worked when systems were deterministic, requirements were stable, and behavior was predictable. But that world is gone. Modern systems are increasingly distributed, rapidly evolving, and now powered by AI components that introduce non-deterministic behavior and ambiguous correctness. In this context, traditional automation strategies begin to break down. Not because they’re wrong, but because they’re incomplete.
This talk reframes “manual testing” as what it has always been at its best: exploratory, hypothesis-driven analysis, an activity uniquely suited to navigating uncertainty, surfacing emergent risks, and making sense of systems that don’t behave the same way twice.
Drawing on real-world examples, I’ll examine where automation fails in AI-driven and complex systems, and why human-led exploration is becoming increasingly critical. We’ll also explore how teams can intentionally design for both deterministic verification and exploratory discovery, rather than treating them as competing approaches. Attendees will leave with a clearer mental model for modern quality: not manual vs. automated testing, but known vs. unknown, deterministic vs. exploratory, and how to build systems that account for both.