Split your API testing into 5 different levels to get faster feedback and better scenario coverage by making use of technologies like Docker, Kubernetes and mocking frameworks.
Room F3 - Track 3: Talks
Technical Testers, Developers, Architects.
In my context we run a microservice architecture with a large number (300+) of API endpoints, both synchronous and asynchronous. Testing these in a shared environment with cross-dependencies is both challenging and very necessary to make sure this distributed monolith operates correctly. Traditionally we would test by invoking an endpoint with the relevant query params or payload and then asserting the response code or body for valid data / type definitions. This proved more and more challenging as the push for CI and common data sources meant dependencies would go up and down with every deployment, which led to flaky tests.
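To make that concrete, a minimal example of this traditional style of test might look like the sketch below; the base URL, endpoint and fields are illustrative, not our actual API:

```python
# Traditional-style API test: hit an endpoint in the shared, fully integrated
# environment and assert on the response code and the shape of the body.
import requests

BASE_URL = "https://shared-test-env.example.com"  # illustrative shared environment

def test_get_order_returns_valid_payload():
    response = requests.get(f"{BASE_URL}/orders/1234", timeout=5)
    assert response.status_code == 200

    body = response.json()
    # Basic data / type checks on the contract
    assert isinstance(body["orderId"], str)
    assert isinstance(body["lineItems"], list)
    assert body["status"] in {"OPEN", "SHIPPED", "CANCELLED"}
```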
I will demonstrate how we leveraged newer technologies and split our API testing into 5 levels to increase our overall confidence. The levels are (ignoring developer-focused unit and unit-integration tests):
1. Mocked black-box testing – where you start up an API (Docker image) identical in version to the one that would go to PROD, but mock out all of its surrounding dependencies. This gives you the freedom to test any known data permutation, and you can simulate network or failure states of those dependencies (see the first sketch after this list).
2. Temporary namespaced API in your CI environment – here you start up your API as it would run in a normal integrated environment, but in a temporary namespace that can be completely destroyed if tests fail: it never reaches the deploy stage and there is nothing to roll back when errors or failures occur. We use Kubernetes and CI config to orchestrate these tests (sketched after this list). Their focus is to check the 80-20 functionality and confirm that the API will meet all the acceptance criteria.
3. Post-deployment tests – usually called smoke tests, which verify that an API is up and its critical functionality is working in a fully integrated environment (see the sketch below).
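To illustrate level 1, here is a minimal sketch of a mocked black-box run, assuming the Docker SDK for Python, a hypothetical orders-api image and a WireMock container standing in for its one downstream dependency; image names, ports and env vars are illustrative:

```python
import docker
import requests

client = docker.from_env()
network = client.networks.create("blackbox-test-net")

# The mock that stands in for the downstream billing dependency.
mock = client.containers.run(
    "wiremock/wiremock:3.3.1",
    detach=True, name="billing-mock", network="blackbox-test-net",
    ports={"8080/tcp": 9090},
)

# The exact same image/tag that would be promoted to PROD, pointed at the mock.
api = client.containers.run(
    "registry.example.com/orders-api:1.42.0",
    detach=True, name="orders-api", network="blackbox-test-net",
    environment={"BILLING_SERVICE_URL": "http://billing-mock:8080"},
    ports={"8080/tcp": 18080},
)
# (readiness polling of both containers omitted for brevity)

# Simulate a failure state of the dependency via WireMock's admin API ...
requests.post("http://localhost:9090/__admin/mappings", json={
    "request": {"method": "GET", "urlPathPattern": "/billing/.*"},
    "response": {"status": 503},
})

# ... and assert the API degrades the way its contract says it should.
resp = requests.get("http://localhost:18080/orders/1234", timeout=5)
assert resp.status_code != 500

for container in (api, mock):
    container.stop()
    container.remove()
network.remove()
```

Because nothing here touches a shared environment, the same run can replay any known data permutation or outage scenario deterministically.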
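For level 2, a rough sketch of the orchestration, assuming a GitLab runner with kubectl access and a hypothetical orders-api manifest directory; the namespace naming, manifests and the env var passed to the tests are all assumptions:

```python
import os
import subprocess
import sys

# One throwaway namespace per pipeline, e.g. orders-api-test-123456.
namespace = f"orders-api-test-{os.environ['CI_PIPELINE_ID']}"

def sh(*cmd):
    subprocess.run(cmd, check=True)

try:
    # Bring the API up exactly as it would run in the integrated environment,
    # but inside a namespace owned by this pipeline alone.
    sh("kubectl", "create", "namespace", namespace)
    sh("kubectl", "apply", "-n", namespace, "-f", "k8s/")
    sh("kubectl", "rollout", "status", "deployment/orders-api", "-n", namespace)

    # Run the acceptance-criteria tests against the namespaced instance.
    env = dict(os.environ,
               API_BASE_URL=f"http://orders-api.{namespace}.svc.cluster.local:8080")
    result = subprocess.run(["pytest", "tests/acceptance"], env=env)
    sys.exit(result.returncode)
finally:
    # Pass or fail, the namespace is destroyed: nothing reached the deploy
    # stage, so there is nothing to roll back.
    sh("kubectl", "delete", "namespace", namespace, "--wait=false")
```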
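And for level 3, the post-deployment smoke tests can stay deliberately small: a health check plus one critical path against the fully integrated environment (the endpoint names below are assumptions):

```python
import os
import requests

BASE_URL = os.environ.get("API_BASE_URL", "https://orders-api.integration.example.com")

def test_api_is_up():
    # A conventional health endpoint; the exact path depends on the service.
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

def test_critical_read_path_works():
    resp = requests.get(f"{BASE_URL}/orders", params={"limit": 1}, timeout=5)
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)
```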
We should be happy by now, right? Fairly happy that the API does what it says on the box… but wait, there is more…
4. Environment stability tests – tests that run every few minutes in an integrated environment and make sure all services are highly available, given the deployments that have completed successfully. Here we use GitLab to control the scheduling (see the sketch after this list).
5. Data explorer tests – tests that run periodically but use some randomization to either generate or extract random data with which to invoke the API. These sorts of tests are crucial for finding the edge cases that are usually missed: issues of low occurrence but generally high risk. I wrote a custom data extractor that runs against our DBs to find strange data sets to use as test data (sketched below).
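For level 4, the scheduling itself is just a GitLab pipeline schedule; the job it triggers can be as simple as the availability probe sketched below, where the service list is illustrative and would in practice be derived from the deployments that have completed successfully:

```python
import requests

# Illustrative health endpoints for the services deployed to the integrated env.
SERVICES = {
    "orders-api":  "https://orders-api.integration.example.com/health",
    "billing-api": "https://billing-api.integration.example.com/health",
    "search-api":  "https://search-api.integration.example.com/health",
}

def test_all_deployed_services_are_available():
    down = []
    for name, health_url in SERVICES.items():
        try:
            if requests.get(health_url, timeout=3).status_code != 200:
                down.append(name)
        except requests.RequestException:
            down.append(name)
    assert not down, f"Services unavailable: {down}"
```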
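For level 5, a condensed sketch of what a data explorer run can look like, assuming a read replica reachable via SQLAlchemy and a hypothetical orders endpoint; the query simply fishes for odd-looking rows (nulls, extreme lengths, negative amounts) and replays a random sample through the API:

```python
import os
import requests
from sqlalchemy import create_engine, text

engine = create_engine(os.environ["READ_REPLICA_URL"])
BASE_URL = os.environ["API_BASE_URL"]

# Deliberately hunt for strange real-world rows rather than hand-crafted fixtures.
EDGE_CASE_QUERY = text("""
    SELECT order_id
    FROM orders
    WHERE customer_name IS NULL
       OR length(customer_name) > 200
       OR total_amount < 0
    ORDER BY random()
    LIMIT 25
""")

def test_api_copes_with_strange_real_world_data():
    with engine.connect() as conn:
        order_ids = [row[0] for row in conn.execute(EDGE_CASE_QUERY)]

    for order_id in order_ids:
        resp = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
        # Low-occurrence data should never translate into a 5xx.
        assert resp.status_code < 500, f"order {order_id} -> {resp.status_code}"
```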
I would like to elaborate on and demonstrate these layers, their execution, and how this has changed the way we test and look at APIs. I would also touch on the tooling we use to achieve this and the pros and cons of this approach.
30-minute Talk