Getting Software Right

Dated: 5 August 2016

I have claimed in my previous post that, loosely speaking, it is impossible to get software right before deploying it. That statement can now be refined by presenting the following two observations:

  1. Many, if not most, errors in software engineering occur when bridging the gap between the informal, real world and the formal world of mathematical specifications. This is what Edsger Dijkstra misleadingly called the “pleasantness problem” — a topic that I have touched on in almost all my oral histories and in one of my research papers on formal methods [1]. Donald MacKenzie addresses the topic at length in his book Mechanizing Proof [2].
  2. Assume, for argument's sake, that professional software engineers do have a clear-cut specification of what their software is supposed to do. Even then, most professionals will not guarantee — because they are not able to guarantee — that their deployed software/hardware system is absolutely “correct.” Timothy Colburn discusses this matter in his book Philosophy and Computer Science [3].

More can be said about what “correct” entails. And the distinction between a digital system and a mathematical model of that system needs further clarification. But I will not delve into those matters here.

Accepting my arguments about the impossibility of getting software right does not imply that you should not take an airplane or drive your current car (which, after all, also contains a lot of software). There is a big difference between the software present in most airplanes today and the software that will be deployed in connected self-driving cars.

One of the main problems, in my opinion, is that a network of self-driving cars opens the door for small security vulnerabilities to lead to global disaster. It is extremely easy to hack into, say, my local hospital, and I don't see why self-driving cars will be more secure in this respect. Given the current state of the art, advocates of self-driving cars will either have to:

  • Abandon the idea of using the Internet — or any similar general-purpose communication medium — and resort to simpler, more direct forms of communication.
  • Wait for some genius to come along who solves the “security problem” which I described in my previous post.

The technology to build full-fledged, yet safe, self-driving cars simply does not exist today. Are there any good reasons to believe it will be available tomorrow? I take the term “self-driving car” here in its extreme sense, i.e., as it is presented in many (but not all) media today. Specifically, a self-driving car can drive on an arbitrary road — not on a pre-defined route, nor on a railroad — without a human in the driver's seat.

REFERENCES

[1] Edgar G. Daylight, Sandeep K. Shukla. On the Difficulties of Concurrent-System Design, Illustrated with a 2×2 Switch Case Study. FM 2009, pp. 273–288.

[2] Donald MacKenzie. Mechanizing Proof: Computing, Risk, and Trust. MIT Press, 2004.

[3] Timothy Colburn. Philosophy and Computer Science. M.E. Sharpe, 2000.

3 Comments

networking self-driving cars

Certainly self-driving cars should not depend on any network for their basic operation, simply because no network has sufficient reliability. But lots of people in the field seem to think that frequent information exchange is desirable and would improve safety ("I'm approaching this low-visibility corner.") and efficiency ("This road is blocked, reroute." / "The upcoming light will turn orange in 3.47 seconds, don't spend power to maintain speed since we're not going to make it anyway."). Vehicle-to-vehicle and vehicle-to-infrastructure communication (V2V/V2I) are fields of their own. Not to mention the desire to keep the cost of software updates low, which is not compatible with isolation from all networks.

So it looks like autonomous vehicles are going to be connected, whether we like it or not. And since IT security is costly and is not a consumer-visible property, economic theory says it won't be much better than that of your average PC or smartphone unless a regulator demands it, and knows how to. But at this point in the development, overly restrictive regulation means that your country could miss the bandwagon of tests, first deployments, and perhaps manufacturing too.

(I work on the AI of a self-driving vehicle.)

turning radius?

I certainly don't want to be anywhere near that monster when it has to negotiate a bend in the road.