Sometimes what we need for quality isn't testing at all.
Room F2 - Track 2: Talks
Pretty much anyone who's curious about roles in quality and testing.
I've done testing in various forms throughout my career. Automated, exploratory, scripted, you name it! Over the years, though, I've been through situations where, even with all the testing in the world, we couldn't quite avoid serious quality problems with the software we were releasing. Have you ever been in such a situation as well? Or is it something you haven't considered before?
If there's one thing I've learned, it's that software testing is an integral part of our pursuit of quality, but it's often not enough. I have several personal stories to tell! Like the time I realised we needed shorter release cycles for faster feedback loops. Ultimately, that required a change in the architecture of the product.
There was also the time we changed our release strategy to progressively roll out new features to customers, so that we could increase our confidence in the software we were shipping. Or even the time I had to redirect my attention to the way developers were working and behaving, rather than just looking at how testing was being done.
With all of them, I learned that quality is a byproduct of various disciplines, not necessarily just of extensive testing. And in retrospect, that made me think of myself not merely as an expert in testing, but as a quality engineer.
In this talk, I'll go deeper into this topic and share some of those stories. I'll tell you how I reason about quality risks and how I choose between doing some form of testing and doing something completely different. Rest assured though, this talk is not in any way against testing. It's about doing testing in the right places and at the right time, and about how sometimes what your team needs isn't testing at all!