Cars and Automatic Programming

Dated: 19 February 2018

What makes race car drivers better than you and me? Apparently their reaction time is not shorter than that of average drivers. Instead, it is their ability to better anticipate the behavior of other drivers, bicyclists and pedestrians that sets them apart [1].

As is well known, computers outperform humans in repetitive tasks, such as those required for spell checking and sorting numbers. A human sorting a very long list of numbers can easily become distracted and make errors. Compared to race car drivers, however, computers are poor at handling dynamic events such as those encountered in traffic, unless, of course, every event that can happen in the real world has been explicitly modeled and programmed into the car's computer.

Perhaps these observations help explain, at least in part, why computers have not yet completely replaced human drivers. Indeed, according to what I take to be Peter Naur and Michael Jackson's central view on technology, we are not able to capture all relevant aspects of the real world (in this case, the world of traffic) in logical rules. Nevertheless, many people clearly desire to substitute computer control for human labor as much as possible.

The idea of replacing a human with a computer has a rich past and spans several domains, including “automatic programming.” In some of my recent writings I have delved into this topic as it was perceived and researched by John Carr, Edsger Dijkstra, Peter Naur and others in the 1950s and 1960s. For example, Carr's dream was to achieve ‘Completely Automatic Programming.’ As I have reported:

This meant that, once a proper algorithm was developed, it should be possible for the machine to decide on its own machine code. The machine of the future ‘should be allowed to engineer its own construction’, in contrast to present ‘fixed code machines’ that are ‘not self-encoding.’ [2, p.217]

Striving for John Carr's “machine of the future” led to many technological advancements in the field of programming and in what we today call “computer science.” But it has not led to a world in which humans are completely left out of the loop in each and every respect. In my opinion, airplanes fly more safely when both humans and computers are in the control loop, and I think the same can be said about self-driving cars. While an airplane pilot has seconds, and in some cases even minutes, to react to the on-board computer's emergency alert, a human driver has to respond immediately in order to avoid a collision between his car and another moving object on the ground. The upshot, then, is that the human driver has to stay focused on the road, and it thus seems unlikely that he will be able to read his newspaper while driving through a busy city (unless he does not mind getting injured in the process).

Certain flavors of semi-autonomous driving seem realistic, but having a car drive without a human in the driver's seat means that the car's computer and its software have made human control completely superfluous. Stating that this is possible amounts to (or at least seems to amount to) believing that computers are also better than humans at anticipating unpredictable events in the real world. For example, can a self-driving car learn to drive in, say, the cities of Mumbai and New York, just as a young and inexperienced human driver can? Mumbai and New York have very different cultures, including “traffic cultures,” and being a good driver in one city does not necessarily make you a decent driver, let alone a good one, in the other. Specifically, can a self-driving car learn to anticipate the behavior of manually controlled taxis in Mumbai, not to mention bicyclists and pedestrians? The average taxi driver in Mumbai has “excellent” driving skills, yet he also routinely breaks traffic laws.

My analysis is starting to sound like a 20th-century debate between proponents of “weak Artificial Intelligence” and those who keep faith in “strong AI.” The only difference is that I use the terms “semi-autonomous car” and “self-driving car,” respectively. Perhaps the history of AI needs to be retold [3] for the benefit of the 21st-century automobile industry.

 

REFERENCES

[1] Special thanks go to my former colleague Pieter van Schaik.
[2] E.G. Daylight. Towards a Historical Notion of 'Turing — the Father of Computer Science'. Special issue of the journal History and Philosophy of Logic, Vol. 36, No. 3, 2015.
[3] Hubert L. Dreyfus. What Computers Still Can't Do: A Critique of Artificial Reason, MIT Press, 1992.
