Hopcroft and Ullman

Dated: 24 October 2017

Is the history of computer science solely a history of progress? I don't think so. Judge for yourself by reading the present post in which I scrutinize the famous textbooks of John E. Hopcroft and Jeffrey D. Ullman.

Quotes from 1969 and 2007

I start by comparing the following two quotes. The first quote comes from Hopcroft & Ullman, 1969:

... a Turing machine is a reasonable, although surely not perfect, model of computation, ... [8, p.135]

The second quote comes from Hopcroft & Motwani & Ullman, 2007:

... the Turing machine long has been recognized as an accurate model for what any physical computing device is capable of doing. [7, p.315]

The emphasis in each quote is mine. (Note also that both quotes contain, and in my opinion appropriately contain, the word “model.”)

At first glance, the two quotes seem to contradict each other. (And then I could rest my case: the history of computer science is indeed one of confusion.) How can a Turing machine be both "not perfect" and "accurate"? But, actually, I have taken each quote out of context. The first quote belongs to an introductory chapter on complexity theory (where time and space bounds matter), while the second quote appears in an informal chapter on Turing machines (where the sole distinction of interest is the one between decidability and undecidability).

Later on in that same chapter from 2007, the authors write:

... we want to convince you that a TM is exactly as powerful as a conventional computer. [7, p.337]

And this message is also conveyed in Hopcroft & Ullman, 1969, in part as follows:

... any computation that can be performed on a modern-day digital computer can be described by means of a Turing machine. [8, p.80]

In retrospect, then, both the 1969 book and the 2007 book convey the same message. So there seems to be no problem after all. But in the following paragraphs I shall argue that the message conveyed in 1969 and again in 2007 is questionable (and that it has been scrutinized by other software scholars as well). So perhaps the history of computer science is a history of conflation after all [5]. ... Judge for yourself ...

Turing Machines and Computers

My contention is that Turing machines are mathematical objects and computers are engineered artifacts. The former can serve as mathematical models of the latter. Likewise, the latter can serve as engineered models (i.e., implementations) of the former. Hopcroft & Motwani & Ullman take a very similar position in their 2007 book. However, in their Chapter 8, they also attempt to demonstrate, mathematically albeit informally, that a computer can simulate a Turing machine and that a Turing machine can simulate a computer. All this in order to arrive at the following dubious result:

Thus, we can be confident that something not doable by a TM cannot be done by a real computer. [7, p.372]

How can a Turing machine, just like a prime number, “do something” in the way that a real computer does something? I will argue that to make sense of all this, we need to be explicit about our modeling activities. Specifically, we should distinguish between two persons:

  1. An engineer who models (i.e., implements) the Turing machine with a real computer.
  2. A scientist who mathematically models the real computer with a Turing machine.

For an elaborate distinction between an engineer and a scientist, see my previous post.

Coming back to Chapter 8 in Hopcroft et al.'s 2007 book, simulating a Turing machine with a computer (cf. 1.) is easy provided that we are allowed to:

accept that there is a potentially infinite supply of a removable storage device such as a disk [7, p.372]

The authors then note that “this argument is questionable,” since the “physical resources to make disks are not infinite.” Subsequently, the authors ask their readership to be “realistic in practice” so that the aforementioned simulation carries through [7, p.372]. Fine with me, but then we are stepping away from a purely mathematical argument. That is, “simulation” acquires an engineering connotation.
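
To make this first direction concrete, here is a minimal sketch, written in Python, of how an engineer might simulate a single-tape Turing machine on a real computer. It is entirely my own toy illustration (the machine, the names, and the step budget are mine, not the authors'), but it does show where the assumption of a "potentially infinite supply" of storage sneaks in: the tape is a data structure that grows on demand, and the max_steps cut-off exists only so that the sketch is guaranteed to terminate.

    # A minimal, illustrative simulator for a single-tape Turing machine (my own toy
    # example; the machine below and all names are mine, not the authors').  The tape
    # is a dictionary that grows on demand: this is where the assumption of a
    # "potentially infinite supply" of storage enters in practice.

    BLANK = '_'

    def run_tm(delta, start, accept, reject, input_string, max_steps=10_000):
        """delta maps (state, symbol) to (new_state, new_symbol, move), move in {-1, +1}."""
        tape = {i: s for i, s in enumerate(input_string)}   # sparse, unbounded tape
        state, head = start, 0
        for _ in range(max_steps):                          # cut-off only so the sketch terminates
            if state in (accept, reject):
                return state == accept
            symbol = tape.get(head, BLANK)
            state, tape[head], move = delta[(state, symbol)]
            head += move
        raise RuntimeError("step budget exhausted; whether the machine halts is the real question")

    # Toy machine: accept exactly the strings consisting solely of 0s.
    delta = {
        ('q0', '0'):   ('q0',   '0',   +1),
        ('q0', '1'):   ('qrej', '1',   +1),
        ('q0', BLANK): ('qacc', BLANK, +1),
    }
    print(run_tm(delta, 'q0', 'qacc', 'qrej', '000'))   # True
    print(run_tm(delta, 'q0', 'qacc', 'qrej', '010'))   # False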

Coming then to the simulation of a computer with a Turing machine (cf. 2.): to do this in polynomial time, the authors now introduce a finiteness constraint, which should be contrasted with the aforementioned supposition that, practically speaking, there is an unbounded supply of resources. Specifically, the authors put "a limit on the number of bits that one computer word can hold" in order for the proof to work out [7, p.369]. Fine with me (and there really is no contradiction here, so don't get me wrong), but the choices made are clearly modeling choices, made so that the overall argument works out in the first place.

In a word, Hopcroft et al. are actually mathematically proving an equivalence between the following two very similar mathematical models:

  • a Turing machine and
  • a carefully-chosen digital abstraction (i.e., a mathematical model) of a real computer.

Hopcroft et al. are merely perceiving real computers in a specific way, i.e. in compliance with classical computability theory. They are implicitly working with a particular mathematical model of a real computer, not with a real computer itself.
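
To give a feel for what such an implicit model might look like, here is a toy sketch, again in Python and again entirely my own illustration (not the construction of [7]): a register machine with a fixed instruction repertoire and a fixed word size. Its only purpose is to show that the word-size limit is a modeling choice written directly into the mathematical object.

    # A toy "digital abstraction" of a computer: my own illustration of the kind of
    # mathematical object that is actually being compared with a Turing machine; it is
    # not the construction of [7].  The modeling choice discussed above is explicit
    # here: WORD_BITS bounds every value the machine can ever hold in a word.

    WORD_BITS = 8
    MASK = (1 << WORD_BITS) - 1              # all arithmetic is modulo 2**WORD_BITS

    def run_ram(program, memory, max_steps=1000):
        """program: list of (op, a, b) triples; memory: dict from address to word."""
        pc = 0
        for _ in range(max_steps):
            if pc >= len(program):
                return memory                # halt by running off the end of the program
            op, a, b = program[pc]
            if op == 'load_const':           # memory[a] := b
                memory[a] = b & MASK
            elif op == 'add':                # memory[a] := memory[a] + memory[b]
                memory[a] = (memory.get(a, 0) + memory.get(b, 0)) & MASK
            elif op == 'jump_if_zero':       # if memory[a] == 0 then pc := b
                if memory.get(a, 0) == 0:
                    pc = b
                    continue
            pc += 1
        raise RuntimeError("step budget exhausted")

    # 200 + 100 = 300, truncated to 8 bits: 300 mod 256 = 44.
    print(run_ram([('load_const', 0, 200), ('load_const', 1, 100), ('add', 0, 1)], {}))

One can prove theorems about an object like this; whether those theorems transfer to the machine on my desk is a modeling question, not a mathematical one.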

The authors are thus definitely not backing up the following two claims of theirs:

  1. A computer can simulate a Turing machine.
  2. A Turing machine can simulate a computer [7, p.362].

So, if the authors want to keep the verb “to simulate,” then they should replace “computer” with “a mathematical model of a computer.” (Moreover, the mathematical modeling language has to be Turing complete.) Alternatively, if the authors prefer to keep the word “computer,” then they should replace the verb “to simulate” with the verb “to model.” In the latter case, we again get something like this:

  1. A computer can model (i.e. implement) a Turing machine.
  2. A Turing machine can mathematically model a computer.

Note that the modeling in 1. and the modeling in 2. are very different. Moreover, modeling implies idealizing: see e.g. my previous blog posts on the Kopetz Principle (here and also here and also here). In this regard, the authors incorrectly draw the following conclusion:

Thus, we can be confident that something not doable by a TM cannot be done by a real computer. [7, p.372]

The previous statement only holds if the authors have demonstrated an isomorphism between Turing machines on the one hand and real computers on the other hand. But they haven't. The isomorphism that they are considering only holds between Turing machines and their carefully crafted models of real computers.

Moreover, is it not possible that, if we look inside a real computer and refrain from mapping our observations onto our favorite mathematical objects, the computer is, in some sense, doing something for us that Turing machines do not do? Only if we look at real computers with our traditional spectacles, in which partially computable functions are the preferred objects, can we equate the Turing machine with the computer, and then only in a traditional and rather weak sense.

Turing Machines and Computer Programs

There is more. Hopcroft et al. also essentially equate Turing machines and computer programs. My contention, in contrast, is that a Turing machine is merely one possible mathematical model of a computer program. A finite state machine is yet another mathematical model of a computer program. Hopcroft et al. claim that the Turing machine is better than the finite state machine (see below). I, however, view neither model as better, for it all depends on the engineering task at hand.

Hopcroft et al. clearly view computer programs as Turing machines:

[I]t is undecidable whether a program prints hello, world, whether it halts, whether it calls a particular function, rings the console bell, ... [7, p.413]

However, every now and then Hopcroft et al. seem to be aware that computer programs are not Turing machines after all:

Programs are sufficiently like Turing machines that the [above] observations [...] are unsurprising. [7, p.413]
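
For readers who wonder why such printing properties are undecidable in the first place, here is a sketch of the standard reduction argument, in Python and with deliberately hypothetical names (halting_decider, would_print_hello_world): if we could decide whether an arbitrary program prints hello, world, we could also decide whether an arbitrary program halts, which is impossible. The code is my own illustration; only the overall style of argument echoes the informal discussion in [7].

    # Sketch of the standard reduction; my own illustration, not a quote from [7].
    # would_print_hello_world is a *hypothetical* decider, passed in as a parameter;
    # the reduction shows that if such a decider existed, the halting problem would
    # be decidable as well, which it is not.

    def halting_decider(program_source: str, would_print_hello_world) -> bool:
        """Decide whether a self-contained program halts, given the hypothetical decider."""
        # Build a new program that first runs the given program with its output
        # suppressed, and then prints "hello, world".  The new program prints
        # "hello, world" if and only if the given program halts.
        wrapper = f"""
    import io, contextlib
    with contextlib.redirect_stdout(io.StringIO()):
        exec({program_source!r})
    print("hello, world")
    """
        return would_print_hello_world(wrapper)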

Furthermore, Hopcroft et al. are aware that computer programs can also be perceived as finite state machines (or finite automata), but they consider a finite state machine to be a poor choice. In their own words:

[T]reating computers as finite automata [...] is unproductive. [7, p.322]

(It is not always unproductive; it all depends on the engineering task at hand.) The authors stick to the Turing machine model and motivate their choice by explaining that computer memory can always be extended in practice:

If we run out of memory, the program can print a request for a human to dismount its disk, store it, and replace it by an empty disk. [7, p.322]

So, to make the undecidability proof work, the authors have decided to model a composite system: a real computer with a human being. No-nonsense engineers, by contrast, will prefer to use a weaker model of computation and stick to the system at hand: a real computer and nothing more. Hopcroft et al. also do not discuss alternative models of computation. Based on their motivations not to use finite state machines, I would opt for a linear bounded automaton and not a Turing machine. But, of course, if I do that then the much-wanted undecidability result does not hold (for linear bounded automata have a decidable halting problem). That, in short, explains why mainstream computer scientists heavily defend the Turing machine as the one and only viable model of computation in an average computability course. To get a more coherent view on what is going on, and how to fix it, I gladly refer to my latest book Turing Tales [5].
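
To see why bounding the resources changes the verdict, consider the following sketch, once more in Python and of my own making: a deterministic machine with a fixed, finite amount of memory has only finitely many configurations, so a run that lasts longer than that number of configurations must revisit one, and halting can therefore be decided by watching for a repeated configuration. This is, in essence, why the halting problem for a linear bounded automaton on a given input is decidable.

    # Sketch (my own illustration): a deterministic machine with bounded memory has only
    # finitely many configurations, so halting can be decided by watching for a repeated
    # configuration.

    def halts_with_bounded_memory(step, initial_config):
        """step maps a configuration to its successor, or to None when the machine halts.
        A configuration must be hashable, e.g. a tuple of control state and memory contents."""
        seen = set()
        config = initial_config
        while config is not None:
            if config in seen:           # a configuration repeats: the machine loops forever
                return False
            seen.add(config)
            config = step(config)
        return True                      # step returned None: the machine halted

    # Toy example: an 8-bit counter that halts when it wraps around to 0.
    def count_up(config):
        counter = (config + 1) % 256
        return None if counter == 0 else counter

    print(halts_with_bounded_memory(count_up, 1))          # True: it wraps to 0 and halts
    print(halts_with_bounded_memory(lambda c: 1 - c, 0))   # False: it toggles between 0 and 1 forever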

In sum, critical readers who resist indoctrination become amused when reading Hopcroft et al.'s Chapter 8 ("Introduction to Turing Machines"), not to mention the peer pressure exerted via Lance Fortnow's blog post in reply to another critical software scholar: Peter Wegner.

A much better dissemination strategy, I believe, is to remain solely in the mathematical realm of Turing machines and other, equivalent mathematical objects when explaining undecidability to students, as exemplified by the textbooks of Martin Davis [3, 4]. A separate concern, then, is to discuss and debate how that mathematical impossibility result could, by means of a Turing complete model of computation, have bearing on the engineered artifacts that are being modeled. The writings of Robert Floyd [6], Benjamin Pierce [10], and Joe Wells [16], just to give three names, show that undecidability most definitely has a practical role to play when used properly. But I have yet to come across a textbook on computability theory where “modeling” is the most prominent name of the game.

References

A lot of the above remains controversial in mainstream computer science. I recommend consulting the many references provided in my book [5] and also the related but not necessarily similar writings of Peter Wegner [13, 14, 15], Carol Cleland [1, 2], Oron Shagrir [11, 12], and Edward A. Lee [9] in order to get the bigger picture.


[1] C.E. Cleland. Is the Church-Turing Thesis True? Minds and Machines, 3:283-312, 1993.
[2] C.E. Cleland. Recipes, algorithms, and programs. Minds and Machines, 11:219-237, 2001.
[3] M. Davis. Computability and Unsolvability. McGraw-Hill, New York, USA, 1958.
[4] M. Davis, R. Sigal, and E.J. Weyuker. Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science. Morgan Kaufmann, second edition, 1994.
[5] E.G. Daylight. Turing Tales. Lonely Scholar, 2016.
[6] R.W. Floyd. On the nonexistence of a phrase structure grammar for ALGOL 60. Communications of the ACM, 5:483-484, 1962.
[7] J.E. Hopcroft, R. Motwani, and J.D. Ullman. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley / Pearson Education, third edition, 2007.
[8] J.E. Hopcroft and J.D. Ullman. Formal Languages and their Relation to Automata. Addison-Wesley, 1969.
[9] E.A. Lee. Plato and the Nerd: The Creative Partnership of Humans and Technology. MIT Press, 2017.
[10] B.C. Pierce. Bounded quantification is undecidable. In Conference Record of the Nineteenth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 305-315. ACM Press, 1992.
[11] O. Shagrir. Effective computation by humans and machines. Minds and Machines, 12:221-240, 2002.
[12] O. Shagrir and I. Pitowsky. Physical Hypercomputation and the Church-Turing Thesis. Minds and Machines, 13:87-101, 2003.
[13] P. Wegner. Why interaction is more powerful than algorithms. Communications of the ACM, 40(5), 1997.
[14] P. Wegner and D.Q. Goldin. Computation beyond Turing machines. Communications of the ACM, 46(4):100-102, 2003.
[15] P. Wegner and D.Q. Goldin. Principles of problem solving. Communications of the ACM, 49(7):27-29, 2006.
[16] J.B. Wells. Typability and type checking in System F are equivalent and undecidable. Annals of Pure and Applied Logic, 98(1-3):111-156, 1999.
