Mindblindness and Self-Driving Cars

Safe driving is a social problem, not a physics problem. A mosquito can safely navigate three-dimensional space, but the learned skills of a neurotypical, others-aware driver are required to safely navigate a landscape of human drivers, pedestrians, and similar targets/projectiles. Software, with lots of sensors and short signal paths, can do mosquito. It can't do neurotypical, yet.

I am borderline Asperger's - my brain is optimized for physics, not fraternization. I "solve" social interaction equations by cogitation, not reflex. Both strength and weakness result.

The weaknesses are apparent in quick-reaction situations like driving in the crowded, distraction-drenched environments of city streets. There, I encounter and must make sense of thousands of other people per minute - someone in the lane in front of me deciding to park, another walking into the driving lane to enter their driver's door, pedestrians who can become jaywalkers at any moment. Every other driver ranges from supercapable and alert, to foreign (that is, trained to drive differently), to young, to old, to intoxicated or sleepy or just plain stupid. Maximum safety requires evaluating and classifying every person and object around me, in fractions of a second.

In January 2016, I got into an argument about the near-term safety of self-driving cars. Being an aspie myself, I did not immediately recognize that "J." was also on the autism spectrum, to the point of being unaware of his own limitations and mindblindness. Not a good place to start.

The argument started when I suggested (to the group J. was among) David Mindell's "Our Robots, Ourselves". J. was sure it was wrong because he disagreed with the conclusions, and got agitated when I offered evidence of Mindell's experience and credibility.

J. was convinced that Tesla's self-driving cars, operating autonomously, would be safer than human drivers within two years. He dismissed numerous rear-ending accidents in Tesla's experiments as "the other driver's fault". He knew this because he had been rear-ended numerous times in the past, before he gave up driving. He was unaware of the need to "connect the dots" - why did he and the Tesla cars have an unusual number of tailgater collisions? Is this about safety, or assigning blame?

I cannot instinctively suss out the intention of other drivers - 95% of people can do better. So I replace "empathy" with my own algorithm, which involves some strategic judgement but not virtual telepathy. Pardon the digression, but it illustrates the point.

I use hundreds of "miniprograms" to stay safe in traffic, and most are high-level strategies, often developed on the spot in response to a surprise. The surest way to stay safe is to not be in traffic - avoid congestion, use quiet slow streets, take the bus, telecommute. All of these require highly-evolved primate-brain strategizing. The existing driving environment evolved around such strategizing, most of it immediate and subconscious for my otherwise-gifted neurotypical friends.

Read The Big Roads by Earl Swift - vast amounts of planning have gone into the design of US interstate highways, making them far safer and better adapted to human strengths and weaknesses than city streets, with adaptations for ice and bad weather. Of course a robot car can navigate freeways, which are optimized for mistake-tolerance. Try robot cars in New York City in a traffic jam after a snowstorm, with piles of dirty snow lining the streets.

I have no doubt that onboard software can solve the physics of driving far faster than I can, though remote software, connected at the speed of light from far away through congested networks, cannot. The United States is more than 50 milliseconds across, and a 120 FPS video feed requires 8 milliseconds to move a frame - both slower than neural reactivity. Our fastest muscular reaction is sound-to-eye-movement, much faster than recognition-to-motor-response. Clever driving-assist software would watch us watching the world, not ignore us.
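The latency arithmetic above can be sketched with a few lines of back-of-envelope math. The fiber-path distance, the speed of light in glass, and the routing overhead below are my own rough assumptions, not figures from the original; real congested networks add queueing delay well beyond the raw light-speed minimum, which is how a ~22 ms one-way physics limit becomes "more than 50 milliseconds across":

```python
# Back-of-envelope latency arithmetic for remote driving software.
# Assumptions (mine): a coast-to-coast fiber path of roughly 4,500 km,
# and light propagating in fiber at about 2/3 of its vacuum speed.

C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3         # glass refractive index slows light to ~2/3 c
US_WIDTH_KM = 4_500          # rough coast-to-coast fiber distance, km

# One-way propagation delay, ignoring routers and queueing entirely.
one_way_ms = US_WIDTH_KM / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000

# Remote control needs a round trip: sensor data out, command back.
round_trip_ms = 2 * one_way_ms

# Time to deliver a single frame of a 120 FPS video feed.
frame_interval_ms = 1000 / 120

print(f"one-way fiber latency (physics floor): {one_way_ms:.1f} ms")
print(f"round trip (physics floor):            {round_trip_ms:.1f} ms")
print(f"120 FPS frame interval:                {frame_interval_ms:.1f} ms")
```

Even these best-case numbers - about 22 ms one way, 45 ms round trip, 8.3 ms per frame - stack up against human-scale reaction budgets before any real-world network delay is added, which is the essay's point: the physics must be solved onboard.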

The human brain is optimized for novelty - normal people seek it. Humans create new brain programs, de novo, in fractions of a second. Software is optimized for constrained and predictable environments - even "learning" software does not learn how to learn - programmers "teach" software in slow, baroque, mistake-laden ways, pecking on keyboards rather than gesturing, pointing, and analogizing.

Proper human-machine system design combines what ordinary people are good at with what machines are good at, achieving a blend of the best talents of each. This rarely happens. Most programmers aren't even aware of the mistakes they make, and turn defensive, not detective. There will come a day when aspie-created software exceeds the capabilities of billions of years of evolutionary adaptation of animals to a threat- and opportunity-filled environment, but that won't happen in two years. J. will have to wait a while for a robot that can drive better than he can. Robots that drive better in challenging environments than unimpaired, properly-trained neurotypical drivers? Decades.

Human-machine combinations can drive far more safely NOW, if we devote a lot of research and development to that. As Mindell points out, this is the way heads-up-displays work in aircraft, and we can evolve those for automotive affordability and capability. The larger effort would be adapting the legal system to reward progress, not attack imperfection. A HUD manufacturer that cut accident rates by half would be punished for the half that remained, under current legal standards, and the cost of those punishments would make the products unaffordable.

Autism spectrum researchers: please study the driving records of autism-spectrum drivers. What kinds of accidents do they have? Interview them. How do they rationalize those accidents?

MindBlindnessCars (last edited 2016-01-24 23:14:19 by KeithLofstrom)