NOV. 3 – 8, 2019
POTSDAM, GERMANY

EUROPE'S GREATEST AGILE SOFTWARE TESTING FESTIVAL!

5 Levels of API Automation

Split your API testing into 5 different levels to give you faster feedback and better scenario coverage by making use of technologies like Docker, Kubernetes and mocking frameworks.

In my context we run a microservice architecture with a large number (300+) of API endpoints, both synchronous and asynchronous. Testing these in a shared environment with cross-dependencies is both challenging and very necessary to make sure this distributed monolith operates correctly. Traditionally we would test by invoking an endpoint with the relevant query params or payload and then asserting on the response code or body for valid data / type definitions. This proved to be more and more challenging as the push for CI and common data sources meant dependencies would go up and down with each deployment, which meant flaky tests.
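As a point of reference, a minimal sketch of such a traditional integrated API test is shown below; the /customers endpoint, base URL and fields are hypothetical stand-ins, not the actual endpoints from my context.

    import requests

    BASE_URL = "https://test-env.example.com"  # shared, fully integrated environment

    def test_get_customer_returns_valid_payload():
        response = requests.get(f"{BASE_URL}/customers/42", timeout=5)

        # Assert on the response code ...
        assert response.status_code == 200

        # ... and on the body's data / type definitions.
        body = response.json()
        assert isinstance(body["id"], int)
        assert isinstance(body["name"], str)
        # Flakiness creeps in here: the test also depends on every downstream
        # service behind /customers being deployed and healthy at this moment.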

I will demonstrate how we leveraged newer technologies and split our API testing into 5 levels to increase our overall confidence. The levels (ignoring developer-focused unit and unit-integration tests) are:

1. Mocked black box testing – where you start up an API (Docker image) identical in version to the one that would go to PROD, but mock out all of its surrounding dependencies. This gives you the freedom to exercise any known data permutation, and you can simulate network or failure states of those dependencies.
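A minimal sketch of this level, assuming WireMock as the mocking framework (the abstract only says "mocking frameworks") and hypothetical ports, endpoints and payloads: the API under test runs as its prod-identical Docker image, while its downstream dependency is replaced by a stub whose failure state we control.

    import requests

    WIREMOCK_ADMIN = "http://localhost:8081/__admin"   # mock for the downstream dependency
    API_UNDER_TEST = "http://localhost:8080"           # prod-identical image, started via Docker

    def stub_downstream_failure():
        # Tell the mock to answer the dependency call with a 503, so we can
        # observe how our API behaves when that dependency is down.
        requests.post(f"{WIREMOCK_ADMIN}/mappings", json={
            "request": {"method": "GET", "urlPath": "/accounts/42"},
            "response": {"status": 503},
        }, timeout=5).raise_for_status()

    def test_api_degrades_gracefully_when_dependency_is_down():
        stub_downstream_failure()
        response = requests.get(f"{API_UNDER_TEST}/customers/42", timeout=5)
        # Expect a controlled error rather than a timeout or an unhandled 500.
        assert response.status_code == 502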

2. Temporary namespaced API in your CI environment – here you start up your API as it would run in a normal integrated environment, but in a temporary space that can be completely destroyed if tests fail. It never reaches the deploy stage, and there is no need to roll back if errors or failures occur; here we use Kubernetes and CI configuration to orchestrate these tests. The tests' focus is to check the 80-20 functionality and confirm that the API will meet all the acceptance criteria.
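In the talk this is driven from CI configuration; the standalone script below only illustrates the flow under the assumption that kubectl drives the cluster, and the manifest, deployment and namespace names are made up.

    import os
    import subprocess
    import uuid

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def main():
        namespace = f"api-test-{uuid.uuid4().hex[:8]}"   # unique per pipeline run
        run("kubectl", "create", "namespace", namespace)
        try:
            # Deploy the same manifests that would go to production, just namespaced.
            run("kubectl", "apply", "-n", namespace, "-f", "k8s/api-deployment.yaml")
            run("kubectl", "rollout", "status", "-n", namespace, "deployment/my-api")
            # Point the 80-20 acceptance suite at the temporary instance and run it.
            env = dict(os.environ, API_BASE_URL=f"http://my-api.{namespace}.svc.cluster.local")
            subprocess.run(["pytest", "tests/acceptance"], check=True, env=env)
        finally:
            # Pass or fail, nothing reached a shared environment; cleanup is
            # simply deleting the namespace.
            run("kubectl", "delete", "namespace", namespace, "--wait=false")

    if __name__ == "__main__":
        main()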

3. Post-deployment tests – usually called smoke tests, which verify that an API is up and its critical functionality is working in a fully integrated environment.
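A small sketch of what such a smoke test can look like; the health endpoint and the single "critical path" call are hypothetical stand-ins for whatever the real API considers critical.

    import requests

    BASE_URL = "https://staging.example.com/my-api"

    def test_api_is_up():
        assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

    def test_critical_path_still_works():
        # One representative end-to-end call, not the full acceptance suite.
        response = requests.get(f"{BASE_URL}/customers/42", timeout=5)
        assert response.status_code == 200
        assert response.json()["id"] == 42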

We should be happy by now, right? Fairly happy that the API does what it says on the box… but wait, there is more…

4. Environment stability tests – tests that run every few minutes in an integrated environment and make sure all services are highly available, given the deployments that have completed successfully. Here we use GitLab to control the scheduling.
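As a rough sketch, a GitLab scheduled pipeline could run a check like the one below every few minutes; the service list and health URLs are illustrative, not our real services.

    import sys
    import requests

    SERVICES = {
        "customers-api": "https://env.example.com/customers/health",
        "orders-api":    "https://env.example.com/orders/health",
        "billing-api":   "https://env.example.com/billing/health",
    }

    def main():
        failures = []
        for name, url in SERVICES.items():
            try:
                if requests.get(url, timeout=5).status_code != 200:
                    failures.append(name)
            except requests.RequestException:
                failures.append(name)
        if failures:
            print(f"Unhealthy services: {', '.join(failures)}")
            sys.exit(1)   # a non-zero exit fails the scheduled pipeline and alerts us
        print("All monitored services are up")

    if __name__ == "__main__":
        main()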

5. Data explorer tests – tests that run periodically but use some randomization to either generate or extract random data with which to invoke the API. These sorts of tests are crucial for finding the edge cases that are usually missed: often low-occurrence but generally high-risk issues. I wrote a custom data extractor that runs against our DBs to find strange data sets to use as test data.
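The extractor itself is a custom tool, so the following is only a sketch of the idea: pull a random sample of "strange" rows from a database and replay them against the API. The query, the heuristic for "strange" and the endpoint are all invented for illustration.

    import psycopg2
    import requests

    API_BASE_URL = "https://env.example.com/my-api"

    def fetch_strange_customer_ids(limit=20):
        conn = psycopg2.connect("dbname=customers")
        try:
            with conn.cursor() as cur:
                # Example heuristic: unusually long names or missing optional fields,
                # sampled randomly so each run explores different records.
                cur.execute("""
                    SELECT id FROM customers
                    WHERE length(name) > 100 OR email IS NULL
                    ORDER BY random()
                    LIMIT %s
                """, (limit,))
                return [row[0] for row in cur.fetchall()]
        finally:
            conn.close()

    def test_api_handles_strange_records():
        for customer_id in fetch_strange_customer_ids():
            response = requests.get(f"{API_BASE_URL}/customers/{customer_id}", timeout=5)
            # The API may legitimately answer 200 or 404, but it should never 500
            # on data that already exists in its own database.
            assert response.status_code < 500, f"customer {customer_id} caused {response.status_code}"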

I would like to elaborate on and demonstrate these layers and their execution, and how this has changed the way we test and look at APIs. I will also touch on the tooling we use to achieve this and the pros and cons of this approach.


