Technical Musings: 2026

Sunday, March 29, 2026

The Dangerous Valley of Self-Driving Cars

Five Nines

    When I was a teenager in the late 80s I worked as a gopher (as in: "Hey kid, go for something I want" - a less fancy name for intern) for the company my Dad worked at.  It was a financial news agency with an office in Manhattan.  They were investigating the latest in Optical Character Recognition (OCR).  They wanted to bypass the expensive data services and get government market information by scanning newspapers.  Unfortunately, if you have ever looked at a newspaper with a magnifying glass, you know that a large amount of the text is nearly unrecognizable up close.  Humans have an amazing ability to see word patterns in badly printed text, an ability that was only matched by early Convolutional Neural Networks decades later (2009).

    So the latest Macs, stacked on desks made from unfinished doors supported by short file cabinets, had horrible accuracy: around 85%.  To my young mind, that seemed pretty good.  But having to edit 15 words out of every hundred would take much longer than having a human type them all up, so the whole idea failed.

    Later in my career, I first had to deal with Service Level Agreements (SLAs) as a customer, and then as someone who had to make those agreements come true.  I learned quickly that 99% is not a very impressive stat for anything that happens more than 100 times.  1 in 100 is good odds when betting on a single race, but a 99% uptime guarantee allows more than three and a half days of downtime over a year.  SLAs are normally specified by how many 9s are in the uptime guarantee: 99.9, 99.99, 99.999, etc.  Five 9s is considered the gold standard for any service; I don't know of any service that advertises six 9s.  And five 9s is only reachable when the problems are well understood and the solutions are mature.  It can take decades before a technology can meet that standard.
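The arithmetic behind the nines is simple enough to sketch in a few lines of Python, showing how much downtime each guarantee actually permits over a year:

```python
# How much downtime per year each SLA "nines" level actually permits.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def allowed_downtime_hours(uptime_pct):
    """Hours of downtime per year permitted by an uptime guarantee."""
    return SECONDS_PER_YEAR * (1 - uptime_pct / 100) / 3600

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime allows {allowed_downtime_hours(pct):.2f} hours down per year")
```

At two 9s you get more than 87 hours of downtime a year; at five 9s, barely five minutes.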

Uncanny Valleys

    Computer special effects in films started in the 80s and kept getting better and better, and every improvement was met with "Wow!"  Well, until the early 2000s, with films like The Polar Express (2004), when audiences found the dead eyes of the otherwise improved characters disturbing.  The explanation at the time was that the characters' faces had gotten so good that audiences' minds started to really recognize them as human faces, but the closeness to reality created a dissonance that the earlier, cruder faces did not.

    This "Uncanny Valley" was dealt with by pulling back from realism (better) and using the improved technology to make simpler (worse) cartoon-like images.  The Incredibles (also 2004) was a good example of this.  Worse is better.

Self-Driving Cars

    There are other things that follow this same pattern: not using a technology to its fullest extent until it is mature can be for the best.  I think self-driving cars fit this pattern.  If I rode in a self-driving car with 99.9% effectiveness, there would be only about a minute and a half in each 24 hours of driving when the system would fail.  Going 60 mph, you can definitely die in a badly driven car in less than a minute and a half.  It probably wouldn't be a single failure - it would be many small errors spread over the whole 24-hour period.  But it wouldn't take more than a few seconds to lead to a crash.  For most people, it would take over a week to drive 24 hours total.  It would be incredibly easy to get complacent and/or bored over a week's time, and in doing so let the almost-good system drive unwatched.  Instead, if the system were to fail every 5 minutes, you'd be watching it like a hawk.  Paradoxically, by failing more often, it would keep a human more alert to its fallibility, and better prepared to take corrective action.  Worse is better.
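To make that concrete, here is a small Python sketch - a toy calculation for illustration, not real crash data - of the daily failure window implied by an effectiveness rate, and how far a car travels at 60 mph during that window:

```python
# Failure exposure implied by an effectiveness rate.
# A toy calculation for illustration, not real crash statistics.
SECONDS_PER_DAY = 24 * 60 * 60

def failure_window(effectiveness_pct, mph=60):
    """Seconds per 24 hours of driving outside the system's competence,
    and the distance covered at the given speed during that window."""
    seconds = SECONDS_PER_DAY * (1 - effectiveness_pct / 100)
    miles = seconds / 3600 * mph
    return seconds, miles

for eff in (99.9, 99.99, 99.999):
    s, m = failure_window(eff)
    print(f"{eff}%: {s:.1f} s/day of failure, {m:.2f} miles at 60 mph")
```

At 99.9%, that's 86 seconds and nearly a mile and a half of highway covered blind; at five 9s, under a second.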

Dangerous Valley 

    The stakes are much higher for millions of vehicles carrying millions of people than for moviegoers disturbed by a character's dead eyes.  It's literally life and death.  It's not an Uncanny Valley, it's a Dangerous Valley.

A Modest Proposal

    We should restrict self-driving cars to Level 2 (current driver-assist systems) until they can prove they can reach Level 5 (full driving automation).  The onus should be on the creators of these systems to prove they are safe.  Allow them to build cars with all the cameras and sensors that will be used for self-driving, but they can only use them to gather data, not drive the car.  The car companies will have to do something to entice owners to allow this - maybe free satellite radio?  Since they will only be gathering data, the systems can be much cheaper than a real self-driving setup.  Use that data to simulate what the AI would do.  Don't let the computer drive until those simulations show it can handle 99.999% of all real-world situations.  Avoid the Dangerous Valley between a human driver forced to pay attention and a car that can safely do the job completely on its own.
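How much shadow-mode data would that take?  One rough statistical yardstick - a sketch that assumes independent trials, which real driving situations are not - gives the number of consecutive failure-free situations needed before you can claim the failure rate sits below a threshold at a given confidence:

```python
import math

def situations_needed(max_failure_rate, confidence=0.95):
    """Consecutive failure-free trials needed to claim the failure
    rate is below max_failure_rate at the given confidence level.
    Assumes independent trials - a simplification of real driving."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

# Demonstrating better than 99.999% (failure rate below 1e-5)
# takes roughly 300,000 clean situations in a row.
print(situations_needed(1e-5, 0.95))
```

And that is per kind of situation - rain, glare, construction zones - which is exactly why gathering data from an entire fleet, rather than a test track, matters.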

Notes

Airplanes 

Some might think that the autopilots airplanes have used for years are comparable to self-driving cars - but the tolerances for airplanes are so much greater.  They fly thousands of feet away from each other in dedicated lanes, with professional pilots and dedicated, professional traffic management.  The commercial air system is an incredibly safe way to travel.  It is a system that has taken decades to perfect, under heavy government regulation, with huge consequences ($$$) for even a single failure.

Informed Consent

    People are marketed "Full Self-Driving" systems (Tesla) that are not.  Tesla does not publish meaningful statistics on its driving automation.  It provides some statistics, but not enough to compare the system to human drivers or even to previous versions of its own software.  Consumers may conflate systems that are completely computerized with ones that still depend on a human element (Waymo, Tesla).  A short test drive before buying a 'self-driving' car would have a low probability of revealing the errors of a system that is more than 99% (but less than 99.999%) effective.  Are consumers sufficiently informed for consent?
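That last point can be sketched with a toy Python model that treats each minute of a test drive as an independent pass/fail trial - real failures cluster around road conditions, so this is only an illustration:

```python
def p_failure_seen(effectiveness_pct, minutes=30):
    """Chance of witnessing at least one failure on a test drive,
    modeling each minute as an independent trial the system passes
    with probability effectiveness_pct/100.  A toy model, not data."""
    return 1 - (effectiveness_pct / 100) ** minutes

for eff in (99.0, 99.9, 99.99):
    print(f"{eff}% effective: {p_failure_seen(eff):.1%} chance of a visible failure in 30 minutes")
```

Even a 99%-per-minute system gives only about a one-in-four chance of showing its flaws in a half-hour drive; at 99.99%, the odds drop below half a percent.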