“Plato and the Nerd,” Part 2a: Penrose et al.

Dated: 13 September 2017

During the past seven years, there have been only a handful of popularized books on engineering and science that I have read from cover to cover in less than a week. These books are:

  1. Donald MacKenzie's Mechanizing Proof: Computing, Risk, and Trust [9]
  2. Peter Naur's Knowing and the Mystique of Logic and Rules [10]
  3. Timothy Colburn's Philosophy and Computer Science [2]
  4. Matti Tedre's The Science of Computing: Shaping a Discipline [13]
  5. Gilles Dowek's Computation, Proof, Machine: Mathematics Enters a New Age [4]

A sixth and brand new item on my list of favorites is Edward A. Lee's Plato and the Nerd: The Creative Partnership of Humans and Technology (MIT Press, 2017), which I have previously introduced here.

Besides the aforementioned books, I also recall being inspired in the early 2000s by the writings of Douglas Hofstadter [7], James Gleick [6], Stephen Wolfram [15], and Roger Penrose [11]. (Obviously I needed more than a week to get through Wolfram's 1000+ pages.) I remember discussing Penrose's book in 2008 at the Institute for Logic, Language and Computation of the University of Amsterdam. Surrounded by logicians, I was told that Penrose was considered a “crackpot” due to his allegedly ill-informed and opposing views on digital physics. Yet it was partly due to Penrose's book, and his intriguing interpretation of Gödel's 1931 results, that I decided to enroll in 2007 in a Master of Logic program at the aforementioned institute in the first place.

 

PENROSE, WEGNER, BAETEN, NAUR, SUCHMAN

Both Penrose and Lee oppose the widespread belief that everything is a digital computer, that the physical world is equivalent to software [8, p.169]. Unlike Penrose, Lee says that the physical world can nevertheless be fully explained using the known laws of physics; in Lee's own words:

Penrose argues that the mind is not algorithmic and must be somehow exploiting hitherto not-understood properties of quantum physics. I am making a less radical argument, in that I argue that the mind is not digital and algorithmic even if it can be fully explained using the known laws of physics. [8, p.182, Lee's emphasis]

Let me hasten to remark, especially for strong proponents of a rational metaphysical view of our digital world, that Lee does not claim in his 2017 book that he, or anyone else for that matter, has found a practical way to solve the Halting Problem. But neither does Lee dismiss the prospect entirely, nor should he if he wants to stay consistent with his philosophical stance. My interpretation (so far) of Lee's account is that some person, or some thing, could possibly accomplish the “impossible” in an unconventional way; that is, without programming (in the common sense of the word) the incomputable function at hand.
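The asymmetry at the heart of the Halting Problem can be sketched concretely (my illustration, not Lee's): by running a program under a step budget we can confirm that it halts, but exhausting the budget tells us nothing about non-halting. The helper `run_with_budget` and the two toy step functions below are hypothetical names of my own choosing.

```python
def run_with_budget(step, state, budget):
    """Semi-decision sketch: step(state) yields the next state,
    or None once the program has halted."""
    for steps_used in range(budget):
        state = step(state)
        if state is None:
            return ("halts", steps_used + 1)  # a positive answer is certain
    return ("unknown", budget)  # budget exhausted: no verdict either way

# A program that happens to halt: the Collatz map, halting at 1.
def collatz(n):
    return None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)

# A program that never halts.
def loop_forever(n):
    return n

print(run_with_budget(collatz, 27, 1000))      # confirms halting
print(run_with_budget(loop_forever, 0, 1000))  # merely "unknown"
```

No budget, however large, turns “unknown” into “does not halt”; that is precisely the gap a halting decider would have to close.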

More central to Lee's book is his observation that not every function needs to be describable in order to be useful in our daily lives [8, p.177]. For example, humans have no need “to measure or write down the distance” that their vehicles carry them “to get value of it” [8, p.177]. Discussing the matter in more general terms, Lee writes:

The fact that computers do not and cannot deal with continuums in no way proves that there are no machines that can deal with continuums. [8, p.174, Lee's emphasis]

A car is one such machine that deals with continuums: it can transport, say, a person from position A to position B, where A and B denote real numbers.
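A digital computer, by contrast, only ever samples a countable subset of the continuum. A minimal Python illustration (my example, not Lee's) of how machine arithmetic silently substitutes nearby rationals for the reals we write down:

```python
from fractions import Fraction

# The decimal 0.1 has no exact binary floating-point representation,
# so the machine stores a nearby rational instead.
print(0.1 + 0.2 == 0.3)  # False: both sides are approximations
print(Fraction(0.1))     # the rational the machine actually stores
```

The second line prints an exact fraction with a power-of-two denominator, which is visibly not 1/10.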

As Lee points out on page 146 of his book, Peter Wegner tried to convey a similar message two decades ago [14]. This was at a time when questioning the dominance of Turing-machine computations was far trickier than it is today. Wegner received quite a backlash from — surprise! — theoretical computer scientists; see, e.g., Lance Fortnow's 2006 blog post. In 2015 I had my students study Wegner et al.'s writings and comment in detail on Jos Baeten's research on interactive computing: research in which Baeten explicitly distinguishes between information and computation, and in which he builds up a more general theory of computation (more adequately called executability theory) based on Baeten et al.'s formal notion of a reactive Turing machine [1], not a classical Turing machine.
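The flavor of interaction that Wegner and Baeten have in mind can be hinted at (and no more than hinted at) with a toy transducer: unlike a classical function from one finite input to one output, it carries state across an open-ended dialogue with its environment. This is my illustrative sketch, not Baeten et al.'s formal reactive Turing machine:

```python
def running_average():
    """A toy interactive transducer: each input provokes an output that
    depends on the entire interaction history, not on a single argument."""
    total, count = 0.0, 0
    out = None
    while True:
        x = yield out  # wait for the environment's next input
        total += x
        count += 1
        out = total / count

machine = running_average()
next(machine)           # start the dialogue
print(machine.send(4))  # 4.0
print(machine.send(8))  # 6.0: the answer depends on the earlier 4
```

The point of the sketch is that the machine's behavior is an ongoing conversation; there is no single "input tape" fixed in advance.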

Coming back to Lee's Plato and the Nerd: again, not every function needs to be describable in order to be useful. Lee provides several examples throughout his book, including that of the balloon machine. “Consider a simple balloon,” Lee writes,

I can think of this balloon as a machine that outputs the circumference of a circle given its diameter. Suppose I inflate the balloon until it reaches a specific diameter d at its widest point. The balloon then exhibits a circumference at a cross-section at that widest point. The “input” to the machine is the diameter d, and the output is the circumference, which by the basic geometry of circles should come out to π × d. If I inflate the machine to a diameter of one foot, it calculates π. [8, p.174]

With this balloon machine, we could “insist on writing the circumference down as a decimal number on paper” — and, hence, attempt to make the function (concerning real numbers) describable after all — but then we “have already forced the problem into the realm of digital computers” [8, p.177]. Just to be clear: Lee takes little or no issue with classical computability theory. His main interests simply lie elsewhere: in the complementarity between humans and machines. Like Penrose and Wegner, Lee does not treat humans as machines or machines as humans. In Lee's words: “We can do many more useful things with computers that complement human capabilities” [8, p.236, original emphasis].

Lee's perspective is also similar to that of Lucy Suchman, a renowned researcher in computer-supported cooperative work. Both scholars question cognitive science's prime premise, that people act on the basis of symbolic representations, and both scholars question the general agreement in cognitive science that “cognition is not just potentially like computation; it literally is computational” [12, p.37, original emphasis].

Lee and Suchman's views align with those of the late Naur. Peter Naur's inspiration came largely from William James's Psychology of Knowing. Naur cites James extensively in his 1995 monograph Knowing and The Mystique of Logic and Rules [10]. For example, on page 12 in Naur's monograph, James is cited as follows:

“Consciousness, then, does not appear to itself chopped up in bits. Such words as ‘chain’ or ‘train’ do not describe it fitly as it presents itself in the first instance. It is nothing jointed; it flows. A ‘river’ or a ‘stream’ are the metaphors by which it is most naturally described. In talking of it hereafter, let us call it the stream of thought, of consciousness, or of subjective life.” [10, original emphasis]

These words by James, endorsed by Naur, support Lee's exposition too. Indeed, besides the examples of the car and the balloon machine, Lee also discusses the brain. It too, as Lee expounds, can be viewed as an information-processing system, and it is far more likely to be a machine that deals with uncountable sets than a computer that deals solely with the countable world of computer science's most favorite mathematical objects (my paraphrase of Lee [8, p.174]).

 

MY FAVORITE QUESTION

I will continue to engage with Lee and other scholars in follow-up blog posts. There are many more themes in Lee's book that I want to discuss; I have barely scratched the surface. That said, every now and then I wish to zoom in on my favorite philosophical question:

1. What is a computer program?

I have examined this question multiple times before; see, for example, my latest book Turing Tales [3] and my post on flowcharts and the year 1973. It was only when reading Lee's final chapter that I came to more fully understand his philosophical inclination (which I will elaborate on in another post). Addressing Question 1 is a prerequisite for attempting to answer the following socially relevant question:

2. Will we (ever) be able to know, let alone convincingly demonstrate to others, that our software is bug free?

I am, not without hesitation, using the terms “computer program” in Question 1 and “software” in Question 2 as synonyms. I did the same in my previous blog post but anticipate having to make further refinements in the near future.

While reading Lee's book, I became more convinced that the distinction I make in Question 2 between “knowing” on the one hand and “convincingly demonstrating” or teaching something on the other is justified. A couple of months ago I would have considered the distinction void. (And I realize that epistemologists of cognition, such as James Fetzer, are probably well aware of this insight; I am currently not in a position to grasp the literature of Fetzer et al. [5].)

Again, I take Lee's exposition to imply the following: a human can “know” things that she cannot explain well, or at all. Consider, for instance, a reputable programmer who “knows” that her software is bug-free yet is unable to convincingly demonstrate her “knowledge” to others. From Lee's perspective, the brains of humans (and thus of programmers) are useful information-processing machines that do “many things that we do not know to be Turing computation” [8, p.179].

 

REFERENCES

[1] J.C.M. Baeten, B. Luttik, and P. van Tilburg. Reactive Turing machines. Information and Computation, 231:143-166, 2013.

[2] T.R. Colburn. Philosophy and Computer Science. M.E. Sharpe, 2000.

[3] E.G. Daylight. Turing Tales. Lonely Scholar, 2016.

[4] G. Dowek. Computation, Proof, Machine: Mathematics Enters a New Age. Cambridge University Press, 2015.

[5] J.H. Fetzer, editor. Epistemology and Cognition. Kluwer Academic Publishers, 1991.

[6] J. Gleick. Chaos: Making a New Science. Penguin Books, 1987.

[7] D. Hofstadter. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1979.

[8] E.A. Lee. Plato and the Nerd: The Creative Partnership of Humans and Technology. MIT Press, 2017.

[9] D. MacKenzie. Mechanizing Proof: Computing, Risk, and Trust. MIT Press, 2004.

[10] P. Naur. Knowing and the Mystique of Logic and Rules. Kluwer Academic Publishers, 1995. ISBN 0-7923-3680-1.

[11] R. Penrose. The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Oxford University Press, 1989.

[12] L.A. Suchman. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, second edition, 2007.

[13] M. Tedre. The Science of Computing: Shaping a Discipline. Taylor and Francis, 2014.

[14] P. Wegner. Why interaction is more powerful than algorithms. Communications of the ACM, 40(5), 1997.

[15] S. Wolfram. A New Kind of Science. Wolfram Media, Inc., 2002.

 

“There is a tendency to rephrase every assertion about mind or brains in computational terms, even if it strains the vocabulary or requires the suspension of disbelief.”

— John Daugman, cited by Edward A. Lee in [8, p.181]

 

 

CLARIFICATION added on 19 September 2017

With this balloon machine, we could “insist on writing the circumference down as a decimal number on paper” — and, hence, attempt to make the function (concerning real numbers) describable after all — but then we “have already forced the problem into the realm of digital computers” [8, p.177].

-->

With this balloon machine, we could (1) “insist on writing the circumference down as a decimal number on paper” [8, p.177] or (2) write “a finite program that, in effect, describes the infinite sequence of digits that constitute π” [8, p.174] — and, hence, attempt (and, in Case (2), successfully attempt) to make the function describable after all — but then we “have already forced the problem into the realm of digital computers” [8, p.177].
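Case (2), a finite program describing the infinite digit sequence of π, can be made concrete. For instance, Gibbons' unbounded spigot algorithm fits in a few lines of Python (my choice of algorithm; Lee does not name one):

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot: a finite text that, run forever,
    streams the decimal digits of pi one at a time."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < m * t:
            yield m  # the next digit is now certain; emit it
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The program is finite even though the sequence it describes is not; that is exactly what makes π describable, and hence within reach of digital computers, in Lee's sense.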
