`Plato and the Nerd,' Part 2b: Computer Science & Descriptions vs. Software Engineering & Prescriptions

Dated: 23 September 2017

 

One of Edward A. Lee's main topics in his book Plato and the Nerd: The Creative Partnership of Humans and Technology (MIT Press, 2017) is the contradistinction between science and descriptions on the one hand, and engineering and prescriptions on the other. Reading between the lines, Lee is addressing the big question: Is computer science really a science?

Based on the writings of Donald MacKenzie [13], Timothy Colburn [2], and Matti Tedre [21], I have become skeptical about computer scientists calling themselves scientists, a topic also raised in discussions with Peter Naur [3] and Donald Knuth [10,11]. (An analysis of these writings lies outside the scope of the present column.) My first impression from reading Lee's book is that he, too, is skeptical. For example, he quotes John Searle from 1984 as follows:

“... anything that calls itself “science” probably isn't ...” [12, p.20]

My second impression of Lee's thoughts is more nuanced and based on `the Kopetz Principle,' which I introduced in my first post pertaining to Lee's book. (And, by the way, here's a link to my second post on Lee's book.) Recall specifically the following explanation of the Kopetz Principle:

We can make definitive statements about models, from which we can infer properties of system realizations. The validity of this inference depends on model fidelity, which is always approximate.
— quoted from Lee's 2012 talk, entitled `Verifying Real-Time Software is Not Reasonable (Today)'

These 2012 words raise the question: How does one obtain good model fidelity? Lee addresses this question with several insightful examples in his 2017 book. My interest lies in Lee's distinction between two mechanisms for using models: the Scientific Mechanism and the Engineering Mechanism [12, p.42].
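A toy numerical illustration may help here. The following sketch is entirely my own (the latency figures are made up; this is not an example from Lee's talk or book): we can make a definitive statement about the model, but the inference to the system realization is only as valid as the model's fidelity.

```python
# A toy illustration of the Kopetz Principle (entirely my own sketch,
# with made-up latency numbers).

import random

# Model: a task completes in exactly 10 ms. About the *model* we can make
# a definitive statement: its latency provably meets a 12 ms deadline.
MODEL_LATENCY_MS = 10.0
assert MODEL_LATENCY_MS <= 12.0  # definitive statement about the model

# Realization: the physical system that the model only approximates.
# Jitter, cache misses, interrupts: fidelity is approximate, never exact.
def realized_latency_ms() -> float:
    return MODEL_LATENCY_MS + random.gauss(0.0, 1.5)  # unmodeled jitter

# Inferring "the realization meets the 12 ms deadline" from the model is
# only as valid as the model's fidelity: it holds often, not definitively.
misses = sum(realized_latency_ms() > 12.0 for _ in range(100_000))
print(f"deadline misses: {misses} out of 100000 runs")  # typically nonzero
```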

 

TWO MECHANISMS

I shall now elaborate on Lee's two mechanisms. The bold-faced names are of my choosing, as are several of the examples.

  • The Scientific Mechanism amounts to choosing a model that is faithful to the modellee. A scientist will study nature and subsequently choose or invent a model that s/he deems useful. For example, the laws of classical mechanics (= model) are an attempt to capture and predict motions in our physical world (= modellee). The world is given and the model is of the scientist's making.
  • The Engineering Mechanism amounts to choosing or producing a modellee that is faithful to the model. For example: a construction engineer uses a blueprint (= model) as a guide to construct a building (= modellee). Another example: a software engineer uses a flowchart (= model) to produce a computer program (= modellee), as explained in my post on flowcharts. A toy rendering of both mechanisms follows in the sketch below.
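The following Python sketch is my own attempt to render both mechanisms in code (the falling-body model, the fitted constant, and the sorting specification are illustrative choices, not examples from Lee's book).

```python
# A minimal sketch of Lee's two mechanisms (all names and numbers are
# illustrative choices of mine, not taken from Lee's book).

import random

# --- Scientific Mechanism: the modellee is given; choose a faithful model.
# The "modellee": a black-box system we can only observe, standing in for
# nature (here, roughly, distance fallen after t seconds, with noise).
def given_system(t: float) -> float:
    return 4.9 * t * t + 0.01 * random.random()

# The scientist proposes the model d(t) = c * t^2 and fits c to
# observations (a one-parameter least-squares fit).
samples = [0.1 * i for i in range(1, 51)]
c = (sum(given_system(t) * t * t for t in samples)
     / sum(t ** 4 for t in samples))
model = lambda t: c * t * t  # faithful to the system, but only approximately

# --- Engineering Mechanism: the model is given; build a faithful modellee.
# The "model": a specification (blueprint) that the artifact must satisfy.
def spec(xs: list, ys: list) -> bool:
    return ys == sorted(xs)  # blueprint: the output is the sorted input

# The engineer constructs an artifact (insertion sort) to match the model.
def artifact(xs: list) -> list:
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

data = [3, 1, 2]
assert spec(data, artifact(data))  # the artifact is faithful to the model
print(f"fitted c = {c:.3f}")       # close to 4.9, yet never exactly 4.9
```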

Lee stresses that both scientists and engineers assume that the “target” (= modellee) is “operating within some regime of applicability of the model” [12, p.42]. Now the big question:

Does a computer scientist work (mostly) in compliance with the Scientific Mechanism or (mostly) with the Engineering Mechanism?

In other words: Is computer science really a science? If so, then a computer scientist, just like other scientists, “constructs models to help understand the target,” i.e., to understand the modellee [12, p.45]. If not, then perhaps a computer scientist is more like an engineer in that s/he

constructs targets to emulate the properties of a model. An engineer uses or invents models for things that do not exist and then tries to construct physical objects (targets) for which the models are reasonably faithful. [12, p.45, original emphasis]

For simplicity, I focus on first-generation computer scientists for now; that is, on researchers in the 1950s and early 1960s for whom the “target” or the “modellee” was the stored-program computer. Indeed, by the late 1940s the stored-program computer existed in nature — if the term “nature” is allowed to be construed broadly; that is, to include engineered artifacts. The stored-program computer had become a “given” for Alan Perlis, Saul Gorn, John Carr, and many other researchers who, by the mid-1960s, would be called “computer scientists.” Again, many of these soon-to-be-called computer scientists were looking for adequate mathematical models for the few stored-program computers that were available in the 1950s and 1960s. (See my article on Turing's legacy [5] and the references therein.)

With regard to theoretical computer scientists, and complexity theorists in particular, a similar story is told by Michael Mahoney [14] — where, for example, the linear bounded automaton was (at some point and by some actors) considered less baroque than the Turing machine for mathematically capturing physical computations. So theoretical computer scientists, too, sought mathematical models in compliance with the Scientific Mechanism. I refer to my second oral history with Knuth [11] and to my latest book [6] for more coverage.

In sum, then, I think it is quite right after all to call somebody like Saul Gorn a `computer scientist of the first hour.' The only small caveat is that the word “science,” as Lee explains,

refers to the study of nature, not to the study or creation of artificial artifacts. Under this interpretation, many of the disciplines that call themselves a “science,” including computer science, are not, even if they do experiments and use the scientific method. [12, p.21]

Therefore, and as already indicated above, I wish to broaden the notion of “nature,” so that it encompasses engineered artifacts as well. Alternatively, we could place “computer science” in a category other than “science” as long as it remains distinct from “software engineering.”

Of course, many historical actors were, and many contemporary actors are, both scientists and engineers — a point that Lee also makes in his book. In my opinion, many computer scientists are more software engineers than they would be willing to admit. This brings me to yet another theme in Lee's book, which I only mention in passing: traditionally, engineering is less respected than science. (And several engineers — as Lee illustrates — have advanced science.) Related remarks about the tendency to place mathematical rigor above all else have been made by Walter Vincenti [22], Peter Naur [17], Michael Jackson [9], and others.

 

DESCRIPTIONS vs. PRESCRIPTIONS

Saul Gorn and fellow computer scientists of the first hour were appropriating and re-casting Turing's 1936 `automatic machines,' which led to Martin Davis's 1958 concept of a `Turing machine' [1]. In a nutshell: Gorn et al. were looking for a good mathematical description of existing computing machinery. Again, I try to convey Lee's two mechanisms in my own words:

  • The Scientific Mechanism amounts to choosing a good description. A description is an account of some event or state of affairs. An example of a description is (a textual representation of) the second law of thermodynamics.
  • The Engineering Mechanism amounts to abiding by a prescription. A prescription is a recommendation that is authoritatively put forward. An example of a prescription is a blueprint (printed on paper) of a building.

As explained in my article on Turing's legacy, Gorn was lecturing about Turing machines all over the place by the early 1960s. In that period of his life he was first and foremost a computer scientist, not an engineer. Observations such as these now help me to understand:

  • Why many computer science courses contain the words “Turing machine” and “Chomsky hierarchy” and the like.
  • Why many software engineering courses hardly if at all rely on the “Turing machine” concept.

It simply does not make much sense to use a Turing machine as a prescriptive device in the world of software engineering (although there are exceptions).
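To make this concrete, here is a minimal Turing-machine simulator of my own devising (the bit-inverting machine is an illustrative choice). It serves nicely as a descriptive mathematical model of computation; as a prescription for building real software it would be absurd.

```python
# A minimal, self-contained Turing-machine simulator (my own toy example;
# the machine below merely inverts a binary word). Useful as a descriptive
# model of computation; useless as a blueprint for real software.

def run_tm(transitions, tape, state="q0", blank="_"):
    """Simulate a single-tape Turing machine until it reaches 'halt'."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Transition table: (state, read symbol) -> (next state, write, head move).
invert = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}

print(run_tm(invert, "10110"))  # prints: 01001
```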

I was initially trained as a software engineer (1995-2000), and as a student I only came across the words “Turing machine” in an appendix of a book on coding theory(!) Times have changed for software engineering students at my local university (KU Leuven), for now they follow a course on computability theory. Of course, curricula differ across countries and continents; nevertheless, I don't think my personal experience is too far off from that of many fellow software engineers of my generation or before. Take Dave Parnas, for example. He told me that he did learn a lot about Turing machines, but primarily (if not solely) from Alan Perlis, a computer scientist of the first hour (not to mention that Perlis was the very first Turing Award winner). Moreover, my educated guess is that Parnas does not consider Turing machines to be central to his profession [18, p.26-28].

But now irony sets in because I will try to show that, in accordance with the two above-stated mechanisms, Parnas was more of a computer scientist than Edsger Dijkstra, and Dijkstra was more of a software engineer than Parnas. To convey this message I will first zoom in on a fundamental problem in computing.

In the rest of this column I shall use the words “computer programmer” as an umbrella term. Specifically:

  • Both computer scientists and software engineers are “computer programmers.”
  • A “computer programmer” need be neither a computer scientist nor a software engineer.

 

DIFFERENT VIEWS ON A FUNDAMENTAL PROBLEM

Regardless of whether you are (or think you are) more of a computer scientist than a software engineer, or the other way around, you face, like computer programmers across the globe, an overarching difficulty: according to Parnas in 1985, we “will never find a process that allows us to design software in a perfectly rational way” [19, p.251]. A similar concern was raised by Peter Naur in the same year [15], reprinted in his 1992 book:

[T]he notion of the programmer as an easily replaceable component in the program production activity has to be abandoned. [16, p.47-48]

To put it bluntly, both Parnas and Naur opposed the over-rational research agendas of Dijkstra, Tony Hoare, and other formal methodists. In Parnas's words:

Even the small program developments shown in textbooks and papers are unreal. They have been revised and polished until the author has shown us what he wishes he had done, not what actually did happen. [19, p.252]

Similar criticism has been conveyed by Niklaus Wirth [4, Ch.5] and Michael Jackson [9].

Clearly, Parnas and Naur had much in common in 1985. Both men viewed a computer program as a non-mathematical model of the real world. Moreover, for Naur the entire development and maintenance process of software is not fully describable, not even in principle [3]. In Naur's 1985 words:

[T]he program text and its documentation has proved insufficient as a carrier of some of the most important design ideas. [16, p.39]

Naur says that if a software company loses all its programmers, then it had better rebuild its software from scratch (with a new team of programmers). All existing code and documentation will be of little value to other experts [3].

I take this strong viewpoint of Naur to align well with Lee's perspective: “knowing” a function — such as a development and maintenance process — does not imply that you can “describe that function in some mathematical or natural language” [12, p.177]; i.e., that you can rigorously document all of your work. Lee's main reasons for coming to this conclusion are, unlike Naur's, based on Information Theory and Kolmogorov-Chaitin Complexity Theory. (Lee's expositions of Shannon's theoretical results are an absolute delight to read.) The argument from Complexity Theory, provided in more technical terms on page 158, amounts to the following observation (in Lee's own words):

Given any written mathematical or natural language, the vast majority of functions with numerical input and output are not describable in that language. This is because every description in any such language is a sequence of characters from a finite alphabet of characters, so the total number of such descriptions is countable. There are vastly more functions, so there can't possibly be descriptions for all of them in any one language. In fact, any language will only be able to describe a tiny subset of them, a countable infinite subset. Does a function need to be describable to be useful? [12, p.177]
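For readers who want the counting argument spelled out, here it is in standard cardinality notation (my rendering, not Lee's exact formulation):

```latex
% Descriptions form a countable set; functions do not.
\begin{align*}
  \text{descriptions:}\quad
    & \Sigma^{*} = \bigcup_{n \ge 0} \Sigma^{n},
      \qquad |\Sigma^{*}| = \aleph_{0}
      \quad \text{(a countable union of finite sets)}, \\
  \text{functions:}\quad
    & \left| \{\, f : \mathbb{N} \to \{0,1\} \,\} \right|
      = 2^{\aleph_{0}} > \aleph_{0}.
\end{align*}
% Hence any one language describes at most countably many functions;
% "the vast majority" of functions have no description in that language.
```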

See my previous post for some useful functions that are not necessarily describable.

Naur and Lee have approached the same topic from different angles, and both strongly oppose a rational metaphysical view of our world. The same can be said about Lucy Suchman, who works in yet another field: Human-Computer Interaction. I view Suchman as someone who, in 1987, also tackled the aforementioned overarching problem in computer programming, but from a social-studies perspective. (I quote her below, not from her original 1987 book but from the 2007 edition [20].) With regard to instruction following between people, Suchman wrote:

Social studies of the production and use of instructions have concentrated on the irremediable incompleteness of instructions (Garfinkel 1967, Chapter 1) and the nature of the work required to “carry them out.” [20, p.112, my emphasis]

The emphasized words, attributed to Harold Garfinkel [8, Ch.1], are akin to the views put forth by Naur and Lee. (Am I exaggerating if I interpret the previous passage as an alternative formulation of the Kopetz Principle, which — once again — states that the model fidelity is always approximate and, thus, that the model is irremediably incomplete?) Concerning the term “instructions” in the previous quote, think about the instructions that somebody gives you to go from point A to point B. Alternatively, and more in connection with Parnas's and Naur's writings, think of the instructions (e.g. software requirements, formalized specifications, program text, sequence diagrams, ...) that you, as a novice programmer in a software company, need to examine in order to (allegedly) be able to grasp all the technicalities of the software project at hand. As we shall see later, Parnas said in 1985 that achieving such insight from documentation alone is feasible in principle. Naur, in contrast, said that human involvement remains key; that is, good documentation and code only get you so far.

Coming back to Suchman: once again, she wrote about instruction following between people (and between people and machines). Both a prescriptive account (of an instruction that shall be carried out) and a descriptive account (of an instruction that has been carried out) are merely approximations of the real happening. In her own words:

[S]uccessful instruction following is a matter of constructing a particular course of action that is accountable to the general description that the instruction provides. The work of constructing that course is neither exhaustively enumerated in the description, nor completely captured by a retrospective account of what was done. [20, p.112, my emphasis]

The emphasized words support Lee's and Naur's views, not those of Parnas. According to Parnas in 1985 it is possible to completely capture the software-design process in a retrospective account. Indeed, he insisted that there is “good news” in that we “can fake” the actual, irrational design process. Specifically,

We can present our system to others as if we had been rational designers and it pays to pretend [to] do so during development and maintenance. [19, p.251, my emphasis]

Many of us, myself included, do try to work like this in the software industry. And, before coming across the writings of Naur in 2010, I really believed Parnas's claim that:

... the final documentation will be rational and accurate. [19, p.256]

We can “fake” the process, Parnas wrote, by

producing the documents that we would have produced if we had done things the ideal way. [19, p.256, my emphasis]

Just like Suchman, Naur would insist that there is no “ideal way,” that we cannot be “accurate,” especially when the human is taken out of the loop to begin with. Moreover, human beings do not, according to Naur, perform like machines:

[M]uch current discussion of programming seems to assume that programming is similar to industrial production, the programmer being regarded as a component of that production, a component that has to be controlled by rules of procedure and which can be replaced easily. Another related view is that human beings perform best if they act like machines, by following rules, ... [16, p.47]

In sum, both Parnas and Naur criticized the programming methodology work of Dijkstra et al., but Naur went a few steps further.

 

AN OVERVIEW

A synthesis is starting to emerge as I write these posts on Lee's 2017 book and the history of formal methods in computer science. Observe, for instance, that Parnas's “faked” approach amounts to choosing a good description of the prior, irrational software-design process conducted by humans. Parnas thus advocated the Scientific Mechanism, believing that it is possible to obtain a rational and complete description of a real-life software-engineering event. Dijkstra, with his prescribed top-down programming methodology, in contrast, followed the Engineering Mechanism more than Parnas did. The irony is that Parnas will be remembered as a software engineer through and through, and Dijkstra mostly as a computer scientist (instead of the other way around). With that said, and as I've discussed before, Dijkstra and Hoare did call themselves “software engineers” in the 1960s and 1970s — an observation that is now starting to make more sense (to me).

All in all, some of my thoughts, particularly about the second half of the 1980s, can now be summarized:

  1. Parnas said that we can get an accurate description of the software-design process (cf. the Scientific Mechanism), not a perfect prescription.
  2. Naur said that we can't even get an accurate description. I take both Lee's and Suchman's philosophies to align quite well with Naur's views. 
  3. Dijkstra and Hoare, in turn, were much more receptive to the idea that we can get a rational and complete prescription; that is, a recommendation that is authoritatively put forward on how to rationally develop “correct-by-construction” software (cf. the Engineering Mechanism).

To conclude, I guess it comes as no surprise that I now explicitly express my preference for the viewpoints of Naur, Lee, and Suchman. I used to be a strong advocate of formal methods. See, e.g., my work on `correct-by-construction' C code for embedded systems [7]. But today I am more receptive to the idea that learning to program, or grasping the code of a colleague, is like learning to play a new instrument. Software is like music. In Naur's words:

This problem of education of new programmers ... is quite similar to that of the educational problem of other activities where the knowledge of how to do certain things dominates over the knowledge that certain things are the case, such as writing and playing a music instrument. [16, p.44]

If the analogy between software and music is warranted, then teaching students how to program is similar to teaching how to play a musical instrument. This observation has ramifications for curricula of computer science and software engineering alike. In Naur's words:

To what extent this can be taught at all must remain an open question. The most hopeful approach would be to have the student work on concrete problems under guidance, in an active and constructive environment. [16, p.48]

There is of course much more to be said. Lee, Naur, and Suchman are only three of many proponents who strive for a creative partnership of humans and technology without equating one with the other.

 

REFERENCES

[1] M. Bullynck, E.G. Daylight, and L. De Mol. Why did computer science make a hero out of Turing? Communications of the ACM, 58(3):37-39, March 2015.
[2] T.R. Colburn. Philosophy and Computer Science. M.E. Sharpe, 2000.
[3] E.G. Daylight. Pluralism in Software Engineering: Turing Award Winner Peter Naur Explains. Lonely Scholar, 2011.
[4] E.G. Daylight. The Dawn of Software Engineering: from Turing to Dijkstra. Lonely Scholar, 2012.
[5] E.G. Daylight. Towards a Historical Notion of `Turing the Father of Computer Science'. History and Philosophy of Logic, 36(3):205-228, 2015.
[6] E.G. Daylight. Turing Tales. Lonely Scholar, 2016.
[7] E.G. Daylight, A. Vandecappelle, and F. Catthoor. The formalism underlying EASYMAP: a precompiler for refinement-based exploration of hierarchical data organizations. Science of Computer Programming, 72(3):71-135, August 2008.
[8] H. Garfinkel. Studies in Ethnomethodology. Prentice Hall, Englewood Cliffs, NJ, 1967.
[9] M.A. Jackson and E.G. Daylight. Formalism and Intuition in Software Development. Lonely Scholar, August 2015.
[10] D.E. Knuth and E.G. Daylight. The Essential Knuth. Lonely Scholar, 2013.
[11] D.E. Knuth and E.G. Daylight. Algorithmic Barriers Falling: P=NP? Lonely Scholar, 2014.
[12] E.A. Lee. Plato and the Nerd: The Creative Partnership of Humans and Technology. MIT Press, 2017.
[13] D. MacKenzie. Mechanizing Proof: Computing, Risk, and Trust. MIT Press, 2004.
[14] M.S. Mahoney. Histories of Computing. Harvard University Press, 2011.
[15] P. Naur. Programming as theory building. Microprocessing and Microprogramming, 15:253-261, 1985.
[16] P. Naur. Computing: A Human Activity. ACM Press / Addison-Wesley, 1992.
[17] P. Naur. Knowing and the Mystique of Logic and Rules. Kluwer Academic Publishers, 1995.
[18] D.L. Parnas. Software engineering programs are not computer science programs. IEEE Software, 16(6):19-30, 1999.
[19] D.L. Parnas and P.C. Clements. A rational design process: how and why to fake it. In H. Ehrig, C. Floyd, and M. Nivat, editors, TAPSOFT Joint Conf. on Theory and Practice of Software Development, Berlin, March 1985. Springer-Verlag.
[20] L.A. Suchman. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press, second edition, 2007.
[21] M. Tedre. The Science of Computing: Shaping a Discipline. Taylor and Francis, 2014.
[22] W.G. Vincenti. What Engineers Know and How They Know It: Analytical Studies from Aeronautical History. Johns Hopkins University Press, 1990.
