"Low priority" is a technical label. It is not a moral one. What testers can do when they find themselves standing between ranking tickets and an inner moral fight?
A bug was filed. Diabetes management app. Critical priority: a normal value that reads like a medical emergency, a reading of 5.2 mmol/L rendered as 5.2 mg/dL. Someone downgraded it to low priority, because: "Workaround exists."
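To make the failure mode concrete, here is a minimal Python sketch of this class of bug. The function names and rendering logic are hypothetical, not the app's actual code; the conversion factor for blood glucose, 1 mmol/L ≈ 18.016 mg/dL, is the standard one.

```python
# Hypothetical illustration of the reported bug: the unit label follows
# the device locale, but the stored number is never converted.
MMOL_L_TO_MG_DL = 18.016  # standard conversion factor for blood glucose

def render_reading_buggy(value_mmol_l: float, locale_unit: str) -> str:
    # BUG: the label changes with the region setting; the value does not.
    return f"{value_mmol_l} {locale_unit}"

def render_reading_fixed(value_mmol_l: float, locale_unit: str) -> str:
    # Convert the stored value before attaching the locale's unit label.
    if locale_unit == "mg/dL":
        return f"{value_mmol_l * MMOL_L_TO_MG_DL:.0f} mg/dL"
    return f"{value_mmol_l:.1f} mmol/L"

print(render_reading_buggy(5.2, "mg/dL"))  # "5.2 mg/dL" -- looks like severe hypoglycemia
print(render_reading_fixed(5.2, "mg/dL"))  # "94 mg/dL"  -- the same, perfectly normal, value
```

The "workaround", presumably switching the region setting back by hand, leaves the buggy path one silent locale change away.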
Three months later, a support ticket arrived. A user had traveled internationally. Her phone's region setting switched automatically. The app silently followed. She trusted the reading. She collapsed three hours later.
This talk is not about what happened in the hospital. It is about what happened in the meeting — where a room full of reasonable people looked at a documented risk and moved on.
The session is structured around three questions that David, the tester who filed the bug, could not answer in that room.
I. What does "low priority" actually mean? We unpack the gap between technical severity and human impact, and introduce a dual severity model that changes the prioritisation conversation before it ends badly.
II. Why didn't he push harder? We examine the accountability mismatch, the weight of release pressure, and three practical tools: asking for decisions to be documented, framing risk in human impact rather than release dates, and making the decision-maker explicitly own the call.
III. What does "ethical" mean here? The bug was filed. The workaround existed. The decision was made above his pay grade. This section draws the line between compliance and responsibility.
"Workaround exists" is not the same as "no one gets hurt."