Researchers have the responsibility of making clear the limits of their understanding of technology, including the software that is soon to be deployed in self-driving cars. Just as most people do not want conventional cars with drunken drivers near their loved ones, I shall give arguments (which complement my previous arguments: here and a follow-up here) for eschewing self-driving cars as well.
My blog posts on self-driving cars together constitute an article that I have written and submitted to journal X for peer review. Fortunately, journal X has granted me the right to keep these posts online.
In my opinion, self-driving cars should be tested in well-delimited areas for the rest of the present century before they are “let loose” on our roads. Unfortunately, given the extremely large investments in technology for fully-autonomous vehicles, it seems that self-driving cars are here to stay. If so, then I foresee the following publication in a few decades:
During the early years of research in self-driving cars, considerable progress created among many the strong feeling that a working system was just around the corner. The illusion was created by the fact that a large number of problems were rather readily solved. It was not sufficiently realized that the gap between such output and safety-critical real-time software proper was still enormous, and that the problems solved until then were indeed many but just the simplest ones whereas the “few” remaining problems were the harder ones—very hard indeed.
This passage is my slight modification of what Yehoshua Bar-Hillel wrote in his famous 1960 report “The Present Status of Automatic Translation of Languages”, in which he retrospectively scrutinized the field of machine translation. Bar-Hillel's original words also appear in the following philosophical writings of Hubert Dreyfus: What Computers Can't Do and What Computers Still Can't Do, i.e., books that changed the course of 20th-century research in Artificial Intelligence (AI). As Dreyfus explains, Marvin Minsky, John McCarthy, and other AI researchers implicitly took for granted that common knowledge can be formalized as facts. Following Plato, Gottfried Leibniz, and Alan Turing, many computer scientists blindly assume that a technique must exist for converting any practical activity, such as learning a language or driving a car, into a set of instructions [4, p. 74]. Ludwig Wittgenstein, by contrast, opposed such a rationalist, metaphysical view of our (now digital) world. In explaining our actions, Wittgenstein would say, we must always
sooner or later fall back on our everyday practices and simply say “this is what we do” or “that's what it is to be a human being” [4, pp. 56-57].
Wittgenstein and computer scientists like Peter Naur and Michael Jackson would object to the supposition that human behavior can be perfectly replaced by man-made technology [2, 6]. They would disagree with a number of prominent people who have recently raised concerns that AI systems have the potential to demonstrate superintelligence. While a human will circumvent a big rock but not a crumpled-up piece of newspaper lying on the road, a self-driving car will try to drive around both. Observations like these clarify why HAL, the superintelligent computer in 2001: A Space Odyssey, remains science fiction. Why would mankind now, all of a sudden, be able to develop a HAL that drives on our cities' roads, circumventing pedestrians, bicyclists, and anything else that humans throw at it?
People in the automotive industry who have patience with my philosophical reflections often end up falling into Dreyfus's trap by saying the following:
Just add a new rule to the car's software to circumvent a "corner case." Keep doing this for each corner case.
An example of a corner case is the crumpled-up piece of newspaper. If we want the car to drive over the newspaper, we need to add extra sensors and functionality to the vehicle and a new rule to the vehicle's controller. (The extra hardware makes the car more expensive, and the additional rule makes the software more complex.)
The problem with this "add a new rule" solution is that the size of the software grows linearly with the number of corner cases. Instead, software should be a model of the real world that is much smaller than the real world itself. That is, we want software to be a compressed representation of everything it entails, not an explicit list of rules.
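The anti-pattern criticized above can be made concrete with a small sketch. The function and obstacle names below are hypothetical, invented purely for illustration; no real vehicle controller works on string labels like this. The point is structural: every newly discovered corner case forces one more explicit entry, so the rule list grows linearly and never becomes a compressed model of the world.

```python
# Hypothetical sketch of the "add a new rule per corner case" pattern.
# Each corner case discovered in testing adds one more explicit entry,
# so the software grows linearly with the number of corner cases.

def decide_action(obstacle: str) -> str:
    """Return a driving action for a detected obstacle (illustrative only)."""
    rules = {
        "big_rock": "steer_around",
        "pedestrian": "brake",
        "crumpled_newspaper": "drive_over",  # rule added after this corner case surfaced
        # ... one more entry for every corner case discovered in testing ...
    }
    # Anything without an explicit rule falls back to the cautious default,
    # which is exactly why the car swerves around harmless newspaper-like objects
    # until someone writes a rule for them.
    return rules.get(obstacle, "steer_around")

print(decide_action("crumpled_newspaper"))  # drive_over
print(decide_action("plastic_bag"))         # steer_around (no rule yet)
```

A human needs no such list: our everyday practices cover the plastic bag, the newspaper, and the rock without anyone enumerating them, which is precisely Dreyfus's point.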
I am willing to change my position on self-driving cars if good arguments are brought forward, preferably by philosophically-inclined computer scientists. What worries me is that:
- Self-driving cars are already being deployed on the streets of multiple states in the USA (and I think the public has the right to be well informed about what is really going on).
- I have yet to come across a paper written by a self-driving-car advocate who addresses the philosophical issues raised by Dreyfus. If you, the reader, think that Dreyfus's arguments are easily dealt with (in connection with safety-critical software, not Internet apps), then please consider writing a paper on this matter or reply to this blog post.
I am aware that machine learning is part and parcel of self-driving-car technology, and I would love to learn more about it. But I have also been told by more than one professional that "good old-fashioned" rule-based techniques are part of vehicle technology too.
Y. Bar-Hillel. The present status of automatic translation of languages. In F.C. Alt, editor, Advances in Computers, volume 1, pages 91-141. Academic Press, New York and London, 1960.
H.L. Dreyfus. What Computers Can't Do: The Limits of Artificial Intelligence. Harper/Colophon, New York, 1979. Revised edition (the first edition appeared in 1972).