How the promise of AI-powered testing is creating new risks, old mistakes, and a massive opportunity for testing to reclaim its strategic value.
For years, the testing industry has chased “liberation” through fewer humans, more automation, and faster delivery. AI in testing is simply the latest version of that empty promise by an industry seemingly intent on destroying its value proposition.
In this keynote I’ll make the case that AI doesn’t eliminate the need for testing (or testers); it magnifies the consequences of weak testing strategy and of tool fetish as policy.
I’ll lay out what I think matters when AI enters the stack: how we validate non-deterministic behavior, how we avoid automation slop, how we keep human judgment in the right places (and admit its limits), and how we build evidence that stands up to regulators, auditors, and reality.
We will explore:
- How self-validating systems erode metric validity, creating systemic and less visible failures
- How AI-augmented testing can amplify both competitive advantage and regulatory exposure
- The moral hazard of “closed loop” automation without human accountability and the myth of “ethical AI”
- A path forward for the testing business built on integrity, risk management, and principled leadership
So join me as we take a sharp view of the AI-in-testing business in an era where regulation lags, marketing outpaces evidence, and autonomy is sold as inevitability. In that era, true competitive advantage will belong to organizations that treat testing as a strategic intelligence function.
The “Great Liberation” isn’t freedom from testers; it’s freedom from bullshit.