A cautionary tale about real-world testing needs for highly automated products (such as driverless cars).
I "tested" Waymo on a short ride visiting colleagues last Fall. "Be careful, you break everything!" I got in "The Wrong Waymo" which drove me in circles. How can all of the following be true? <extremely funny photos of unfolding events>
1- Waymo has no foolproof way to prevent riders from taking the wrong car, and the cars all look identical (see the sketch after this list).
2- Support couldn't re-route the car, even after trying.
3- I learned of this edge case from neighbors whose own ride had been swapped; they noticed the wrong destination and fixed it. I didn't have my glasses on and hit "Start Ride" anyway.
4- The third time the car arrived, I was approached by people telling me I should get out.
5- Waymo's video onboarding tells riders the car won't take the route a human would, because it calculates against vast data, emergency vehicles, and so on. That message primes riders to accept it when the car is going the wrong way.
6- My phone died, so I had to ask strangers for a charger, and I ended up in an elderly woman's house. A human-impact tale ends in a truly human way.
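Waymo hasn't published how (or whether) it binds a rider to a specific vehicle, so the following is purely a hypothetical sketch of the kind of check that would close point 1: pair each trip with a one-time code that both the rider's app and the assigned vehicle know, and refuse to start the ride on a mismatch. All names here (`Vehicle`, `issue_pairing_code`, `start_ride`) are invented for illustration.

```python
import secrets


def issue_pairing_code() -> str:
    """Generate a short one-time code shown in the rider's app
    and on the assigned vehicle's display."""
    return f"{secrets.randbelow(10_000):04d}"


class Vehicle:
    """Hypothetical robotaxi that knows which trip it was dispatched for."""

    def __init__(self, vehicle_id: str, assigned_trip_id: str, pairing_code: str):
        self.vehicle_id = vehicle_id
        self.assigned_trip_id = assigned_trip_id
        self.pairing_code = pairing_code

    def start_ride(self, rider_trip_id: str, rider_code: str) -> bool:
        """Refuse to start unless both the trip ID and the rider's
        one-time code match this vehicle's dispatch assignment."""
        if rider_trip_id != self.assigned_trip_id:
            print("Wrong vehicle for this trip; ride not started.")
            return False
        if rider_code != self.pairing_code:
            print("Pairing code mismatch; ride not started.")
            return False
        print("Rider and vehicle matched; ride started.")
        return True


# Usage: a rider who walks up to a lookalike car assigned to someone else
# is refused, instead of being driven to a stranger's destination.
code = issue_pairing_code()
car = Vehicle(vehicle_id="W-042", assigned_trip_id="trip-123", pairing_code=code)
assert car.start_ride("trip-999", code) is False  # wrong car, identical exterior
assert car.start_ride("trip-123", code) is True   # correct pairing
```

The point of the sketch is only that identical-looking vehicles make a positive rider-to-car handshake load-bearing; "Start Ride" as a single unverified tap is exactly the failure mode described above.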
*Previously*: Waymos couldn't park in their own lot due to congestion and spent all night honking at each other. That made national news before complaints forced a fix, which took days. *After* my "kidnapping," a neighborhood cat was killed by a Waymo. That also made the news (Rolling Stone's coverage was the most complete).
How many crimes can one robocar commit? And when human "test" drivers are in the driver's seat, the accident statistics are worse. Police and the general public are now doing Waymo's real-world QA.