Self-Driving Cars? No
Because AI is becoming capable of doing what is very difficult for humans (logic, accounting, playing chess, indexing the web), we presume that tasks easy for humans (visual/acoustic pattern recognition, movement tracking and response) must be easy for computers. Not even remotely true.
Your brain is primarily a sight/sound/tactile processor, with a powerful social computer and a very weak logic processor. It is easy to duplicate and exceed the recently-evolved logic processor. The social processor evolved over a million years, and the movement processor over a billion years. The movement processor is so optimized (down to the molecular level) that the general task is mindbogglingly complex, well beyond the technological state of the art.
I talked with a friend about a "skyrail" system to lower cars into the flow of freeway traffic, and pluck them back out again - a very artificial and constrained driving task, though still somewhat beyond current AI capabilities. The long exit-free westbound stretch of I-84 from 181st to 47th in Portland would be a great place to try this. Tokyo might be better - more geeks, fewer lawyers. Broadly applied, this technology would eliminate the need for high self-powered onramp acceleration, huge high horsepower engines, and all the weight and pollution that entails.
The general driving task - looking for anomalies in the road, predicting the behavior of other drivers, pedestrians, animals, etc. - involves rapidly building a dynamic world model with human social knowledge. A ball bounces into the road - slam on the brakes, there may be a kid chasing it. A pine cone bounces into the freeway - slam on the brakes and risk getting rear ended? Probably not. That is a huge social calculation, on top of powerful visual computing, performed in milliseconds. We invented controlled-access freeways to intentionally remove most of the anomalies, permitting safe operation at high speed.
Still, every accident and low-visibility weather condition creates dangerous anomalies, like the half-dozen trucks that rear-ended each other on I-84 near Baker, Oregon, in mid-January 2015. Those were attentive, skilled, professional drivers on a proper freeway; what would Google do?
Google can't even do searches right - lately, they seem to be doing template matching rather than logical winnowing of their gigantic corpus. There's no way children will be safe from a Google-controlled template-driven car, even with fast frame rates and human-scale visual pattern processing milliseconds away. Human drivers are not perfect, of course, but most drivers avoid many frightening accidents far more often than they fail.
What computers CAN add to driving is "virtual omniscience", information about the road ahead that drivers don't have. Route planning and congestion observation are easy; congestion prediction is possible. Debris on the road from an accident three days ago? Diff the scene against one captured a week ago, magnify the anomalies, and present them on the driver's heads-up display, while notifying the highway maintenance crew to clean up the dangerous debris (at night, when the roads are empty but the debris is hard to see).
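The diff-and-magnify idea can be sketched in a few lines. This is a minimal illustration, not a production pipeline: it assumes aligned grayscale road images as NumPy arrays, and the `threshold` and `gain` values are arbitrary assumptions for the sake of the example.

```python
import numpy as np

def highlight_anomalies(baseline, current, threshold=30, gain=3.0):
    """Diff a current road image against a week-old baseline and
    brighten the regions that changed (e.g. new debris).

    baseline, current: aligned 2-D grayscale uint8 arrays.
    threshold: minimum per-pixel difference to count as an anomaly.
    gain: how strongly to brighten anomalous pixels for the display.
    """
    # Signed arithmetic avoids uint8 wraparound when subtracting.
    diff = np.abs(current.astype(np.int16) - baseline.astype(np.int16))
    mask = diff > threshold                 # pixels that changed
    overlay = current.astype(np.float32)
    overlay[mask] = np.clip(overlay[mask] * gain, 0, 255)  # magnify anomalies
    return overlay.astype(np.uint8), mask

# Hypothetical scene: a flat road with one new bright object.
baseline = np.full((4, 4), 100, dtype=np.uint8)
current = baseline.copy()
current[2, 2] = 200                         # "debris" appears
overlay, mask = highlight_anomalies(baseline, current)
print(mask.sum())                           # 1 anomalous pixel flagged
```

In practice the hard parts are registration (aligning the two images) and lighting changes, but the core operation really is this simple.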
It is annoying that programmers spend so much time attempting to bypass excellent human pattern recognition and doing a lousy job, when there are tasks that people are very poor at which can be automated easily and profitably. Humans, plus tools that compensate for weaknesses, can be an incredibly powerful combination, far more capable and cost-effective than humans or computers alone.
The Wide Range of Human Competence
Impaired and incompetent drivers cause most accidents. While an "average" Google Car might someday be better than an "average" driver, there is no such thing as an average driver; there is instead a range of abilities. If 20% of drivers cause 80% of accidents, then yes, let a robot drive them if it is twice as good as "average". But do not replace the 80% of drivers who are better than the robot.
Let's put some (fake) numbers on that. Assume two groups of drivers, 200 bad drivers and 800 good drivers. A bad driver has an accident every year; a good driver has an accident every 16 years. So, there are 250 accidents a year, 200 of them caused by bad drivers, 50 by good drivers, and the average rate of accidents is 0.25 per year per driver. If the robot car can manage 0.125 accidents per car per year, we've saved 125 accidents a year. Is that the best we can do?
No, we can do better if we only replace the bad drivers with robots. The bad drivers' 200 accidents drop to 25, while the good drivers' 50 accidents stay at 50, because good drivers are better than the robot. That is 75 accidents per year (instead of 125 with robots everywhere), at 20% of the fleet conversion cost.
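The arithmetic above can be checked directly; the numbers are the essay's illustrative assumptions, not real accident statistics.

```python
# The (fake) numbers from the example above.
bad, good = 200, 800                # drivers in each group
bad_rate, good_rate = 1.0, 1 / 16   # accidents per driver per year
robot_rate = 0.125                  # assumed robot accident rate

baseline = bad * bad_rate + good * good_rate    # humans only
all_robot = (bad + good) * robot_rate           # replace everyone
bad_only = bad * robot_rate + good * good_rate  # replace only bad drivers

print(baseline, all_robot, bad_only)  # 250.0 125.0 75.0
print(baseline - bad_only)            # 175.0 accidents saved per year
print(bad / (bad + good))             # 0.2 of the fleet converted
```

Replacing only the worst 20% of drivers saves 175 accidents a year; replacing everyone saves only 125, at five times the cost.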
Optimizing Humans and Robots Together
As Marvin Minsky's writings suggest, Google is not in the AI business; they are in the pattern discovery business. Intelligence builds classes of models on neural hardware optimized to run predictive models of the physical and social universe. A collection of CPUs and memory is far more general-purpose, and less likely to gravitate toward the physical and social "subset" the human brain is good at. The driving environment is constructed to match human capabilities and weaknesses - its patterns are optimized for human recognition, not robotic pattern discovery.
As Google's corporate success demonstrates, robotic pattern discovery provides useful information to humans. But a self-driving car sharing the road with human drivers is socially opaque. A human driver reveals intent through their car's jittering acceleration and steering, and by head and body movements if the driver is visible (I hesitate to enter a crosswalk if a right-turning driver does not look at me).
A Google car does none of that; that is a poor but repairable design decision. Google's self-driving cars have suffered many rear-end collisions, some of which could have been avoided if the following driver had been better informed. Why not inform that driver, with warning lights or displays? In February, a Google car in Mountain View attempted to shift one lane to the right to bypass an obstacle - into the path of a VTA transit bus. That was partly the result of sloppy Google software, but the VTA bus driver could have stopped in time if he had known what the car was attempting to do. Why not use an LED display to inform other drivers? It would be simple and inexpensive.
Displays inside human-driven cars could help those drivers avoid accidents and optimize driving as well.
Either-OR is for simpletons.