Mindblindness and Self-Driving Cars


Safe driving is a social problem, not a physics problem. A mosquito can safely navigate three-dimensional space, but the learned skills of a neurotypical, others-aware driver are required to safely navigate a landscape of human drivers, pedestrians, and similar targets/projectiles. Software, with lots of sensors and short signal paths, can do mosquito. It can't do neurotypical, yet.

I am borderline Asperger's - my brain is optimized for physics, not fraternization. I "solve" social interaction equations by cogitation, not reflex. Both strength and weakness result.

The weaknesses are apparent in quick-reaction situations like driving in the crowded, distraction-drenched environments of city streets. Driving down city streets, I encounter and must make sense of thousands of other people per minute - someone in the lane in front of me deciding to park, another walking into the driving lane to reach their driver's door, pedestrians who can become jaywalkers at any moment. Every other driver ranges from supercapable and alert, to foreign (AKA trained to drive differently), to young, to old, to intoxicated or sleepy or just plain stupid. Maximum safety requires evaluating and classifying every person and object around me, in fractions of a second.

In January 2016, I got into an argument about the near-term safety of self-driving cars. Being an Aspie myself, I did not immediately recognize that "J." was also on the autism spectrum, to the point of being unaware of his own limitations and mindblindness. Not a good place to start.

The argument started when I suggested (to the group J. was among) reading David Mindell's "Our Robots, Ourselves". J. was sure it was wrong because he disagreed with its conclusions, and got agitated when I offered evidence of Mindell's experience and credibility.

J. was convinced that Tesla's self-driving cars, operating autonomously, would be safer than human drivers within two years. He dismissed the numerous rear-ending accidents in Tesla's experiments as "the other driver's fault". He knew this because he himself had been rear-ended numerous times in the past, before he gave up driving. He was unaware of the need to "connect the dots" - why did he and the Tesla cars both have an unusual number of tailgater collisions? Is this about safety, or about assigning blame?

I cannot instinctively suss out the intentions of other drivers - 95% of people can do better. So I replace "empathy" with my own algorithm, which involves some strategic judgement but not virtual telepathy. Pardon the digression, but it illustrates the point.

I use hundreds of "miniprograms" to stay safe in traffic; most are high-level strategies, often developed on the spot in response to a surprise. The surest way to stay safe is not to be in traffic at all - avoid congestion, use quiet slow streets, take the bus, telecommute. All of these require highly-evolved primate-brain strategizing. The existing driving environment evolved around such strategizing.
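For illustration only, here is a toy sketch of what one such "miniprogram" might look like if written down. Every name and threshold below is hypothetical - the real human versions are fuzzy, learned, and improvised on the spot, which is exactly the point.

```python
# Toy illustration only: a made-up, rule-based "miniprogram" for one
# narrow situation (the car ahead may be about to park). All names
# and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Vehicle:
    speed_mph: float    # observed speed
    brake_lights: bool  # brake lights visible?
    near_curb: bool     # drifting toward the parked cars?
    signal_on: bool     # turn signal visible?

def likely_to_park(car: Vehicle) -> bool:
    """Guess whether the car ahead is about to stop and park."""
    slowing = car.speed_mph < 15 and car.brake_lights
    return slowing and (car.near_curb or car.signal_on)

def my_response(car: Vehicle) -> str:
    """Pick a defensive action for this one situation."""
    if likely_to_park(car):
        return "back off and prepare to change lanes"
    if car.brake_lights:
        return "increase following distance"
    return "maintain"

# Example: a slow car, brake lights on, drifting toward the curb.
print(my_response(Vehicle(speed_mph=8, brake_lights=True,
                          near_curb=True, signal_on=False)))
```

A human carries hundreds of these, invents new ones mid-maneuver, and blends them without enumerating cases - which is what the rest of this essay argues software cannot yet do.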

Read "The Big Roads" by Earl Swift - vast amounts of planning have gone into the design of US interstate highways, making them far safer and better adapted to human strengths and weaknesses than city streets, with adaptations for ice and bad weather. Of course a robot car can navigate freeways, which are optimized for mistake-tolerance. Try robot cars in New York City in a traffic jam after a snowstorm, with piles of dirty snow lining the streets.

I have no doubt that onboard software can solve the physics of driving far faster than I can, though remote software, connected at the speed of light from far away through congested networks, cannot. The United States is more than 50 milliseconds across, and a 120 FPS video feed requires 8 milliseconds to move a frame - both slower than neural reactivity. Our fastest muscular reaction is sound-to-eye-movement, much faster than recognition-to-motor-response, and clever driving-assist software would watch us watching the world, not ignore us.
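A quick back-of-the-envelope check of those latency figures - the distance and fiber-speed numbers below are rough assumptions of mine, not from the essay:

```python
# Back-of-the-envelope latency arithmetic for the claims above.
# Distance and signal-speed figures are rough assumptions.

C_FIBER_KM_S = 200_000   # ~2/3 of c in optical fiber, km/s
US_WIDTH_KM = 4_500      # rough coast-to-coast distance

one_way_ms = US_WIDTH_KM / C_FIBER_KM_S * 1000
round_trip_ms = 2 * one_way_ms  # before any routing/queuing delays

frame_ms = 1000 / 120    # one frame period at 120 FPS

print(f"one-way across the US in fiber: ~{one_way_ms:.0f} ms")
print(f"round trip, before congestion:  ~{round_trip_ms:.0f} ms")
print(f"one 120 FPS frame period:       ~{frame_ms:.1f} ms")
# ~22 ms one way, ~45 ms round trip (past 50 ms once real routing
# and congestion are added), and ~8.3 ms per frame - consistent
# with the figures in the text.
```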

The human brain is optimized for novelty - normal people seek it out. Humans create new brain programs, de novo, in fractions of a second. Software is optimized for constrained and predictable environments - even "learning" software does not learn how to learn. Programmers "teach" software in slow, baroque, mistake-laden ways, pecking on keyboards rather than gesturing, pointing, and analogizing.

Proper human-machine system design combines what ordinary people are good at with what machines are good at, achieving a blend of the best talents of each. This rarely happens; most programmers aren't even aware of the mistakes they make. There will come a day when Aspie-created software exceeds the capabilities of billions of years of evolutionary adaptation of animals to a threat- and opportunity-filled environment, but that won't happen in two years. J. will have to wait a while for a robot that can drive better than he can.

Suggested study: Autism-spectrum researchers, please study the driving records of autism-spectrum drivers. What kinds of accidents do they have? Interview them. How do they rationalize those accidents?