Beyond the testing that comes from the development process, let's take on the challenge of building automated tests from the user's side.
Test automation engineers, managers, product owners
With microservices split across different contexts owned by more than 10 product teams, and a testing approach centred on monitoring, it is not obvious how to answer a very reasonable question: “how good is our test coverage?” That question usually brings follow-up questions: “what tests should we add to improve our coverage?”, “we already have too much noise coming from our tests – which tests can we delete before adding more?”, and “how should we decide on test schedule and severity?”
All of these questions deserve a reasonable answer, and in trying to provide one we had to ask ourselves: what exactly are we covering with an automated test? It should not be user flows – they can be too complex; not features – they can vary widely in complexity; not business requirements – they are usually too open to interpretation. So we came up with the idea of defining user jobs – something concrete and not too open to interpretation or variation.
I believe we successfully implemented this solution: a list of user jobs, each with a severity and its connections to automated tests, so that all of those answers can be grounded in data. I would like to share our experience of implementing it: where and when it can be helpful, which processes are affected, and what the benefits are.
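To make the idea concrete, here is a minimal sketch of what such a user-jobs registry could look like. The job names, severities, and test IDs are hypothetical illustrations for this abstract, not Trustpilot's actual implementation.

```python
# Hypothetical sketch: each user job has a severity and the automated tests
# that cover it, so coverage questions can be answered from data.
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    CRITICAL = 1
    HIGH = 2
    MEDIUM = 3


@dataclass
class UserJob:
    name: str                # concrete, narrowly scoped user job
    severity: Severity       # drives test schedule and alerting decisions
    covering_tests: list[str] = field(default_factory=list)  # test IDs


# Illustrative data only.
JOBS = [
    UserJob("consumer submits a review", Severity.CRITICAL,
            ["e2e_submit_review", "api_review_created"]),
    UserJob("business replies to a review", Severity.HIGH,
            ["e2e_business_reply"]),
    UserJob("consumer edits own review", Severity.MEDIUM),  # no coverage yet
]


def coverage_report(jobs: list[UserJob]) -> None:
    """Answer 'how good is our coverage?' and 'what should we add?'."""
    covered = [j for j in jobs if j.covering_tests]
    print(f"coverage: {len(covered)}/{len(jobs)} user jobs")
    for job in jobs:
        if not job.covering_tests:
            print(f"gap: '{job.name}' ({job.severity.name}) has no tests")


coverage_report(JOBS)
```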
I will share the journey of test coverage estimation at Trustpilot, along with the benefits and downsides of the process.
30-min New Voice Talk