A practical way to make software quality visible beyond dashboards and isolated metrics.
How do you actually know if the quality of your software is improving?
I’ve seen many teams where the dashboards looked great (fewer bugs, more tests, higher coverage), yet everyone still felt uneasy about releasing. At the same time, I’ve worked on products where the numbers didn’t look impressive, but the software felt stable and predictable.
That gap between metrics and real confidence is what prompted this talk.
We talk about quality all the time, but when someone asks for a status update, we often fall back on numbers that don’t tell the full story. As testers, we constantly form opinions about quality, risk, and confidence, but we rarely make those insights visible in a way others can easily understand.
In this session, I’ll share how I started exploring a different way of looking at quality through something I call the Quality Evolution Tracker. It’s a simple, evolving model that helps teams reflect on how quality changes over time by combining data with human judgement.
This is not a vendor talk. There’s no finished product and no sales pitch. The focus is on lessons learned, mistakes made, and ideas that helped teams have better conversations about quality.
You’ll leave with practical ideas to rethink how you measure and discuss quality in your team, and how to make testing insights more visible and useful for decision-making.