Performance testing isn’t enough
Over its decade-long lifespan, our product has faced ever-increasing data volumes - and with them, ever-increasing performance complaints from our customers. As a result, we decided to refactor the product to a new architecture, with the main goal of improving performance.
But how can we prove that this actually results in performance improvements users can perceive - and how can we prevent regressions from creeping in? After defining a set of performance-critical use cases together with product management, I set up continuous monitoring that tracks their resource usage – as a non-developer.
I’ll start the talk by outlining what is needed to implement performance monitoring.
Second, I’ll share a few issues the monitoring uncovered, along with their causes and fixes:
- A log statement which consumed gigabytes of memory
- A deadlock whose removal made everything slower
- A performance degradation that was nearly impossible to debug
In the third part, I’ll show the unexpected issues that performance monitoring brought to light, like the case where the product’s main functionality broke even though all automated tests were green.
Finally, I’ll share my tips and tricks for anyone who wants to implement something similar, and make the case for why continuous performance monitoring is worth having - even when the product you’re working on is an old-school desktop application like ours rather than a cloud application.