Have you ever been thrown into the middle of an emergency to help test a system you don't know? You're given a couple of docs and a few screenshots, and now you have to come up with a detailed test plan. Where do you start?
You try to understand what kinds of data the system deals with: input, configuration, temporary, results - anything. Then you can start defining that data for the tests.
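One lightweight way to capture that breakdown is as a small data model. The sketch below is purely illustrative: the category names mirror the ones above, while everything else (the DataItem and TestDataInventory classes, the sample entries) is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class DataCategory(Enum):
    """Broad categories of data a system touches during a test."""
    INPUT = "input"                  # data fed into the system
    CONFIGURATION = "configuration"  # flags and settings it runs under
    TEMPORARY = "temporary"          # intermediate state created along the way
    RESULT = "result"                # outputs you check to confirm behaviour


@dataclass
class DataItem:
    """One piece of data you need to prepare, control, or inspect."""
    name: str
    category: DataCategory
    description: str = ""


@dataclass
class TestDataInventory:
    """Everything a single test needs, grouped by category."""
    items: list[DataItem] = field(default_factory=list)

    def by_category(self, category: DataCategory) -> list[DataItem]:
        return [item for item in self.items if item.category == category]


# Hypothetical inventory for an imagined import feature.
inventory = TestDataInventory(items=[
    DataItem("upload file", DataCategory.INPUT, "CSV the user submits"),
    DataItem("retry limit", DataCategory.CONFIGURATION, "how many times to retry"),
    DataItem("import report", DataCategory.RESULT, "summary shown after the run"),
])
print([item.name for item in inventory.by_category(DataCategory.INPUT)])
```

Even a rough inventory like this makes it obvious which data still has to be prepared before testing can start.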
The problem is that if you get this wrong, you're going to waste a lot of precious time before you even get to actual testing. Preparing the wrong initialization data, running the system under the wrong configuration, interpreting the wrong results - all of these can set you back, costing time you don't have.
In this workshop you're going to put data modeling into practice. You'll start from a system description. Given a possible set of tests, you will analyze which data is required, which flags to set, and what to look for as confirming results.
You'll break down the data at each step, consider the system's state, and suggest a strategy for performing these tests: how to control the system and get trustworthy results.
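To give a rough picture of what such a breakdown might end up looking like, here is a hypothetical test-plan entry in the same spirit; the feature, flag names, and expected values are invented for illustration, not taken from any real system.

```python
from dataclasses import dataclass


@dataclass
class TestStep:
    """One planned test: what to prepare, how to configure, what to verify."""
    name: str
    required_data: dict[str, str]     # initialization data to prepare up front
    config_flags: dict[str, bool]     # configuration the system must run under
    expected_results: dict[str, str]  # what to look for as confirming results


# Hypothetical example: none of these names come from a real system.
import_step = TestStep(
    name="import a well-formed customer file",
    required_data={"customer_file": "customers_10_rows.csv"},
    config_flags={"strict_validation": True, "dry_run": False},
    expected_results={"imported_rows": "10", "error_log": "empty"},
)
```

In the workshop you'll produce this kind of breakdown by hand, for each test, against the system description you're given.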