Eliminate Waste by Implementing Test Impact Analysis

Get rid of long and expensive automated regression and validation test suites. Do you really need all those end-to-end tests? Are you really sure? Most times, less is more.

Test impact analysis is a technique that determines which tests to run based on the impact of source code changes. Let me share with you how we are saving time and money with this approach.
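To make the idea concrete, here is a minimal sketch of the core mechanism, in TypeScript. It assumes a static import graph is available (in practice you would derive one from the compiler or a bundler); the module names, the graph, and the `impactedTests` helper are all illustrative, not taken from the Spartacus codebase:

```typescript
// Hypothetical test impact analysis: given a static import graph, select
// only the tests whose (transitive) dependencies include a changed file.

type Graph = Record<string, string[]>; // module -> modules it imports

// Collect every module reachable from `start` by following imports.
function transitiveDeps(graph: Graph, start: string): Set<string> {
  const seen = new Set<string>();
  const stack = [start];
  while (stack.length > 0) {
    const mod = stack.pop()!;
    for (const dep of graph[mod] ?? []) {
      if (!seen.has(dep)) {
        seen.add(dep);
        stack.push(dep);
      }
    }
  }
  return seen;
}

// A test is impacted if it imports, directly or transitively, a changed file.
function impactedTests(graph: Graph, tests: string[], changed: string[]): string[] {
  const changedSet = new Set(changed);
  return tests.filter((test) => {
    const deps = transitiveDeps(graph, test);
    deps.add(test); // a change to the test file itself also counts
    return [...deps].some((d) => changedSet.has(d));
  });
}

// Illustrative graph: cart.spec exercises cart.service, which uses price.util.
const graph: Graph = {
  "cart.spec.ts": ["cart.service.ts"],
  "cart.service.ts": ["price.util.ts"],
  "checkout.spec.ts": ["checkout.service.ts"],
  "checkout.service.ts": [],
};

const tests = ["cart.spec.ts", "checkout.spec.ts"];
console.log(impactedTests(graph, tests, ["price.util.ts"])); // ["cart.spec.ts"]
```

With a change to `price.util.ts`, only `cart.spec.ts` is selected; `checkout.spec.ts` never touches that module, so skipping it costs nothing.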

Merging a couple of lines of TypeScript code changes into our main or epic branches meant running thousands of unit tests and hundreds of Cypress end-to-end tests. The whole build and verification process in our Travis-based pipeline took between 45 and 90 minutes. The team seemed happy with it. This long process assured quality. The product owner was happy. The software manager and developers didn't complain. But I saw waste and a major block to productivity. Even after adding parallelization, improving performance, and removing unnecessary tests, it was still taking up to 1 hour to get a build.

Therefore, like most testers on the planet, I began to swim against the current. I asked difficult questions and noticed a lack of correlation between code changes and tests executed. Developers didn't care about this correlation. But I found it crucial. Our product, an Angular-based open-source storefront library for SAP Commerce, was taking too long to build. So much infrastructure and computing power was being used to test the side effects of a few lines of code in an obscure class with limited functionality. My research led me to Microsoft's work on Test Impact Analysis. I investigated further and read many academic papers. My conclusion was that we were using a brute-test strategy instead of a wise-test method.

My colleagues knew my temper was not exactly smooth, so when I told them we needed to test less, they thought I had finally lost my mind. They knew I had argued, at times passionately, for a thorough and comprehensive regression test suite. The rejection of my idea was swift and not exactly polite. But like any politician worth their salt, I argued back and asked for time and resources to run trials. The team grudgingly agreed.

This short talk is about the journey of implementing Test Impact Analysis. I would like to share with you the pains, sorrows, laughter, and joys that come with the wisdom learned in this process.
The obvious question was: "which tests do we run then?" I noticed we had decided that ALL tests were important and essential. So, we ran them all. That was the first mental construct that needed reassessment: not all tests are the same. I began the painful task of splitting tests into 'core' and 'supplemental'. Everybody hated the idea. But when I carefully explained the impact of this basic notion and the positive performance improvements in the pipeline, my team began to like me again.

Let me share with you what I did wrong, how I learned from my errors, how I used machine learning to take a context-based approach to selecting test cases, and how I was able to reduce our build times from up to 1 hour to 20 minutes without sacrificing software quality. Most times in life, less is more. Swimming against the current is necessary for survival. All it takes is courage.

(The SAP Spartacus open-source project can be found at https://github.com/SAP/spartacus)
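The core/supplemental split can be sketched in a few lines of TypeScript. This is a simplified illustration under my own assumptions, not the Spartacus pipeline: suite names, tags, and path mappings are invented, and the selection rule (core suites always run, supplemental suites run only when a change touches the paths they cover) is the basic notion described above, stripped of the machine learning layer:

```typescript
// Hypothetical core/supplemental selection: 'core' suites run on every
// build; 'supplemental' suites run only when a change touches a path
// they cover. All names and paths below are illustrative.

interface TestSuite {
  name: string;
  tier: "core" | "supplemental";
  coveredPaths: string[]; // source areas this suite verifies
}

function selectSuites(suites: TestSuite[], changedPaths: string[]): TestSuite[] {
  return suites.filter(
    (s) =>
      s.tier === "core" ||
      s.coveredPaths.some((p) => changedPaths.some((c) => c.startsWith(p)))
  );
}

const suites: TestSuite[] = [
  { name: "smoke-e2e", tier: "core", coveredPaths: ["src/"] },
  { name: "checkout-e2e", tier: "supplemental", coveredPaths: ["src/checkout/"] },
  { name: "wishlist-e2e", tier: "supplemental", coveredPaths: ["src/wishlist/"] },
];

// A change under src/checkout/ runs the core suite plus checkout-e2e only.
const selected = selectSuites(suites, ["src/checkout/cart.service.ts"]).map(
  (s) => s.name
);
console.log(selected); // ["smoke-e2e", "checkout-e2e"]
```

Even this naive path-prefix rule skips entire end-to-end suites that a change cannot affect, which is where most of the build-time savings come from.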
