Submitted by egdaylight on
Researchers have a responsibility to make clear the limits of their understanding of technology, including the software that is soon to be deployed in self-driving cars. Just as most people do not want conventional cars with drunk drivers near their loved ones, I shall give arguments (complementing my previous arguments: here and follow-up here) for eschewing self-driving cars as well.
My blog posts on self-driving cars constitute an article that I have written and submitted to journal X for peer review. Fortunately, journal X has granted me the right to keep these posts online.
In my opinion, self-driving cars should be tested in well-delimited areas for the rest of the present century before they are “let loose” on our roads. Unfortunately, given the extremely large investments in technology for fully autonomous vehicles, it seems that self-driving cars are coming and are here to stay. If so, then I foresee the following publication in a few decades:
During the early years of research in self-driving cars, considerable progress created among many the strong feeling that a working system was just around the corner. The illusion was created by the fact that a large number of problems were rather readily solved. It was not sufficiently realized that the gap between such output and safety-critical real-time software proper was still enormous, and that the problems solved until then were indeed many but just the simplest ones whereas the “few” remaining problems were the harder ones—very hard indeed.
This passage is my slight modification of what Yehoshua Bar-Hillel wrote in his famous 1960 report `The Present Status of Automatic Translation of Languages', in which he retrospectively scrutinized the field of machine translation [1]. Bar-Hillel's original words also appear in Hubert Dreyfus's philosophical writings What Computers Can't Do [3] and What Computers Still Can't Do [4], books that changed the course of 20th-century research in Artificial Intelligence (AI). As Dreyfus explains, Marvin Minsky, John McCarthy, and other AI researchers implicitly took for granted that common knowledge can be formalized as facts. Following Plato, Gottfried Leibniz, and Alan Turing, many computer scientists blindly assume that a technique must exist for converting any practical activity, such as learning a language or driving a car, into a set of instructions [4, p. 74]. Ludwig Wittgenstein, by contrast, opposed such a rationalist, metaphysical view of our (now digital) world. In explaining our actions, Wittgenstein would say, we must always
sooner or later fall back on our everyday practices and simply say `this is what we do' or `that's what it is to be a human being' [4, pp. 56-57].
Wittgenstein and computer scientists like Peter Naur and Michael Jackson would object to the supposition that human behavior can be perfectly replaced by man-made technology [2, 6]. They would disagree with a number of prominent people who have recently raised concerns that AI systems have the potential to demonstrate superintelligence. While a human will circumvent a big rock but not a crumpled-up piece of newspaper lying on the road, a self-driving car will try to drive around both [5]. Observations like these clarify why HAL, the superintelligent computer in 2001: A Space Odyssey, remains science fiction. Why would mankind now, all of a sudden, be able to develop a HAL that drives on our cities' roads, circumventing pedestrians, bicyclists, and anything else that humans throw at it [7]?
The standard rebuttal goes: just add a new rule to the car's software to circumvent a “corner case,” and keep doing this for each corner case.
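To see why I find this rebuttal unconvincing, consider a minimal sketch of what such rule-patching looks like. Everything here is hypothetical and invented for illustration; no real vehicle's software is being described. The point is that the list of hand-written branches grows with every newly discovered case, and the default still has to cover everything nobody has thought of yet:

```python
# A deliberately naive, hypothetical "one rule per corner case" planner.
# None of these predicates come from any real vehicle's software; they
# only illustrate how hand-written rules accumulate.

class Scene:
    """Toy stand-in for the output of a perception system."""
    def __init__(self, objects):
        self.objects = set(objects)

    def contains(self, obj):
        return obj in self.objects

def plan_action(scene):
    if scene.contains("big rock"):
        return "swerve"
    if scene.contains("crumpled newspaper"):
        return "ignore"   # patch added after a field report
    if scene.contains("plastic bag"):
        return "ignore"   # another patch, another corner case
    if scene.contains("tumbleweed"):
        return "ignore"   # and another...
    # ...one new branch for every newly discovered corner case...
    return "brake"        # conservative default for everything unseen

print(plan_action(Scene({"crumpled newspaper"})))  # ignore
print(plan_action(Scene({"shredded tire"})))       # brake: not yet patched
```

No finite list of such patches covers the open-ended variety of situations that humans handle by, in Wittgenstein's terms, simply knowing what we do.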
- Self-driving cars are already being deployed on the streets of multiple states in the USA (and I think the public has the right to be well informed about what is really going on).
- I have yet to come across a paper written by a self-driving-car advocate who addresses the philosophical issues raised by Dreyfus. If you, the reader, think that Dreyfus's arguments are easily dealt with (in connection with safety-critical software, not Internet apps), then please consider writing a paper on this matter or replying to this blog post.
6 Comments
machine learning versus explicit lists of rules
Submitted by kdegrave on
It is not always necessary to add a new rule to the on-board software. Perception in AVs is handled in large part by machine learning methods. A new example can be added to the training set without making the model any larger (see the sketch after this comment). If the training dataset grows by an order of magnitude, it is a good idea to also increase the size of the model somewhat, so as to sit at a more favorable point in the bias-variance trade-off, but this expansion is usually quite sublinear.
Machine learning models are (usually) already a form of the 'compressed representation' that you want.
(Rule learning is also a form of machine learning, and it does produce lists of rules, which may grow longer as you add examples, but again they won't necessarily grow linearly. I don't know whether anyone uses old-fashioned rule learning for AVs.)
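A minimal sketch of the first point, using scikit-learn on made-up toy data (nothing here resembles an actual AV perception stack): a fixed-capacity model has the same number of parameters no matter how many examples it is trained on.

```python
# Sketch: adding a training example grows the dataset, not the model.
# A logistic regression over d features always has d + 1 parameters,
# however many examples it is fit on. Toy data, purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # 100 examples, 5 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # made-up labels

model = LogisticRegression().fit(X, y)
print(model.coef_.size + model.intercept_.size)    # 6 parameters

# A newly observed corner case becomes one more training example:
x_new = rng.normal(size=(1, 5))
X2, y2 = np.vstack([X, x_new]), np.append(y, 1)

model2 = LogisticRegression().fit(X2, y2)
print(model2.coef_.size + model2.intercept_.size)  # still 6 parameters
```

Growing the model to exploit a ten-times-larger dataset would mean adding features or moving to a higher-capacity architecture, which is a deliberate design step, not an automatic consequence of adding examples.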
Wittgenstein's argument in context
Submitted by kena42 (not verified) on
+1 on the previous commenter.
Our general desire to make AI aware of the world as a whole, rather than as a set of rules, notwithstanding, let's remember that adding 'rules' to the training set for every new situation is more or less what happened in evolution. Our own current human ability to reason and avoid obstacles is the result of millions of years of adding new rules to the training set and seeing the neural network evolve as a result.
With computerized models we have an advantage over evolution: we can add new rules and evolve the model much faster than biology would allow.
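As a rough sketch of this analogy (a generic incremental learner on toy data, invented for illustration; neither evolution nor a real AV training pipeline works this simply): the architecture stays fixed while experience accumulates, one situation at a time.

```python
# Toy sketch of the evolution analogy: a fixed architecture is updated
# as new situations arrive. The parameter vector (the "genome") keeps
# its size; only the accumulated experience grows. Purely illustrative.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()
classes = np.array([0, 1])

for generation in range(1000):            # stand-in for evolutionary time
    x = rng.normal(size=(1, 5))           # one new situation
    label = np.array([int(x[0, 0] > 0)])  # stand-in for selection pressure
    model.partial_fit(x, label, classes=classes)

print(model.coef_.shape)  # (1, 5): same-sized parameter vector throughout
```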
Tesla Crash and Deep learning
Submitted by egdaylight on
Here's a recent newspaper article about `deep learning.' I think the journalist tried to give a balanced story, contrary to what the title seems to suggest.
http://www.nytimes.com/2016/09/20/science/computer-vision-tesla-driverle...
corner cases remain problematic, also with deep learning
Submitted by egdaylight on
Deep learning has similar problems to those encountered with GOFAI (Good Old-Fashioned AI):
"Or take a fluid scene like a dinner party. A person carrying a platter will serve food. A woman raising a fork will stab the lettuce on her plate and put it in her mouth. A water glass teetering on the edge of the table is about to fall, spilling its contents. Predicting what happens next and understanding the physics of everyday life are inherent in human visual intelligence, but beyond the reach of current deep learning technology."
--- cited from the aforementioned newspaper article (see the previous comment in this thread)
These scenarios are corner cases which --- following Wittgenstein --- differentiate the behavior of humans from the behavior of engineered systems.
Common Sense on Self-Driving Cars
Submitted by egdaylight on
Please read A. Marshall's “After Peak Hype, Self-Driving Cars Enter the Trough of Disillusionment.”
waking up
Submitted by egdaylight on
These kinds of articles on self-driving cars were very hard to find just 18 months ago:
+ from The Guardian
+ from The Conversation