AI coding assistants like Cursor AI and Windsurf are changing how software is built, and tests can now be generated faster than ever. Yet many teams are discovering a new problem: fragile automation, over-generated test suites, and prompts that look clever but fail to produce real confidence.
This talk introduces Vibe Testing—a practical approach to testing in AI-assisted development environments. Instead of relying on prompt tricks, Vibe Testing focuses on execution discipline and thoughtful collaboration between engineers and AI.
The session is structured around the 10 Commandments of Vibe Testing, a set of principles that guide how tests should be designed, scoped, reviewed, and evolved when AI is part of the workflow. Using live demonstrations with Cursor AI and Windsurf, we will explore how to break complex flows into testable intents, decide what should be automated versus observed, manage the context given to AI tools, and validate outcomes before code leaves the editor.
Attendees will see how techniques like agent-assisted exploration, pre-commit testing, and fast feedback loops can be applied in practice—while avoiding common pitfalls such as over-generated tests, flaky assertions, and opaque AI behavior.
Participants will leave with a practical mental model for testing in AI-driven development—one that balances speed, control, and trust while keeping quality firmly in human hands.