Testing at the boundary between good and evil
Technology like blockchains and AI can be utopian or dystopian. How do we test this - and should we?
Blockchain technology came to the world's attention with Bitcoin and its associated cryptocurrencies, but it is about far more than payments. Testers will soon need to rigorously evaluate new blockchain applications in fields ranging from supply chain management and manufacturing to automotive, insurance and fintech. But how do we test them - and how far does our responsibility go?
Before we can begin to test an application built on a blockchain, we need to understand what makes this technology so different from anything that has come before. Much functional testing needs to happen before many of these applications are production-ready - but along with functional testing comes the responsibility to consider the ethical problems that will arise.
With the recent furore over Facebook, there is a rising public awareness of data privacy and ownership. Public blockchain technology gives us an unprecedented opportunity to confront this and take back control of our own data, using decentralised identity systems under which individuals can choose how much of their data they expose at any given time. Yet the potential for abuse is also unprecedented.
The unique, append-only architecture of a blockchain means that records written to a public blockchain are there in perpetuity and cannot be deleted - something that is already causing headaches in the world of GDPR compliance. What does this mean for the right to be forgotten? Or for medical or criminal records, where data must be deletable on request?
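To see why deletion is effectively impossible, consider a minimal sketch of an append-only, hash-chained ledger (illustrative Python only, not any real blockchain implementation): each entry's hash covers the previous entry's hash, so editing or removing an earlier record invalidates everything written after it.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    """Append a record, chaining it to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": block_hash(record, prev_hash)})

def verify(chain: list) -> bool:
    """Recompute every hash; any edit or deletion upstream breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        if block["hash"] != block_hash(block["record"], prev_hash):
            return False
        prev_hash = block["hash"]
    return True

chain = []
append(chain, {"user": "alice", "data": "medical record"})
append(chain, {"user": "bob", "data": "insurance claim"})
assert verify(chain)

# Attempting to "forget" alice's record by editing it in place...
chain[0]["record"]["data"] = "REDACTED"
assert not verify(chain)  # ...is immediately detectable: the chain no longer verifies
```

On a public network, thousands of independent nodes hold copies of the verified chain, which is why a record cannot quietly be redacted to satisfy an erasure request.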
These may seem like philosophical questions rather than software testing questions, but who else will be the gatekeeper in ensuring that this exciting new technology has been used in a responsible way? Is it time to add ethics testing to the armoury of skills we already possess?
It is not just blockchain applications that create these challenges. AI and machine learning also pose troubling ethical dilemmas. We are already seeing the beginnings of AI-enhanced automation testing, and as machines take over more and more of the mechanical checks that used to be such a large part of testing, perhaps it is time to re-emphasise the importance of humans in the process: not checking simple functional requirements, but scrutinising the application and use of the software itself.