Dijkstra's Rallying Cry for Generalization
http://dijkstrascry.com
Believing in humanity again
http://dijkstrascry.com/node/149
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">2 February 2017</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <div>Breaking News: The Dutch government has decided to abandon "software solutions" in their forthcoming elections: <a href="http://www.standaard.be/cnt/dmf20170201_02707485" target="_blank" rel="nofollow">http://www.standaard.be/cnt/dmf20170201_02707485</a><strong> That's one small step back for machines, one giant leap for mankind.</strong></div>
<div><strong><br /></strong></div>
<div style="text-align: justify;" dir="ltr">Similar advances are being made in connection with self-driving cars: safety engineers are casting doubt on the viability of having self-driving cars roam our cities. Security experts — in computer science's flagship journal, Communications of the ACM (February 2017) — even contemplate the possibility that the Internet of Things (and especially: <em>Moving</em> Things) will become a failed technology. Are self-driving cars the zeppelins of the 21st century?</div>
<div style="text-align: justify;" dir="ltr"><br /></div>
<div style="text-align: justify;" dir="ltr">Regardless of whether these "technology pessimists" are right or wrong (or a bit of both), voices of concern are being raised and are starting to be heard by fellow engineers. This progress comes despite the large investments that have already been made to deploy fully autonomous vehicles as soon as possible (if not already), so that our children allegedly will not have to learn how to drive.</div>
<div dir="ltr">
<p class="MsoNormal" style="text-align: justify;">That being said, as a historian of technology and a concerned safety engineer myself, I still foresee the following publication a few decades from now:</p>
<blockquote><p class="MsoNormal" style="text-align: justify;">"During the early years of self-driving cars, progress created among many the strong feeling that a working system was just around the corner. The illusion was created by the fact that a large number of problems were rather readily solved. It was not sufficiently realized that the problems solved were just the simplest ones whereas the few remaining problems were the harder ones, very hard indeed."</p>
</blockquote>
<p class="MsoNormal" style="text-align: justify;">That is my slight modification (already presented <a href="/WittgensteinCarsTuring">here</a>) of Yehoshua Bar-Hillel's 1960 report `The Present Status of Automatic Translation of Languages', in which he retrospectively scrutinized the entire field of machine translation. In a forthcoming talk (to be announced in a couple of weeks) I will argue that the next chapter in the History of Failed Technologies will be one about the Internet of Moving Things and, specifically, network-connected cars that are designed to roam <em>arbitrary</em> roads. On a positive note, I hope to convey that some less ambitious — i.e., more incremental — projects, such as the deployment of podcars on railroads, are far more realistic paths to actually solving the mobility problems we face today. My inspiration also comes from <a href="http://www.gva.be/cnt/dmf20161124_02589588/ingenieur-peter-hellinckx-kijkt-of-podcars-in-antwerpen-kunnen">Peter Hellinckx's insightful talk</a>, presented a couple of months ago in Antwerp.<br /></p>
<p class="MsoNormal" style="text-align: justify;">A sociological reflection, in turn, amounts to asking the following question: Why do some entrepreneurs want to change the world in a drastic manner, given that small, incremental steps are more likely to succeed? Perhaps they are <em>not</em> aware of computer science's holy grail, which is to engineer systems that are both correct by construction (or completely reliable) and fully automated. In layman's terms: a seemingly insignificant software error in the digital system of a network-connected, automated car can, when exploited by a malicious hacker, result in a crash of that car and in thousands of other crashes. (Having cars mechanically attached to railroads is one of many ways to at least reduce the impact of this hazard. Mechanically and drastically constraining the cars' maximum speed is yet another.)</p>
<p class="MsoNormal" style="text-align: justify;">Safety engineers are paid and expected to envisage such calamities, and I shall therefore not refrain from doing so in my talk — entitled <em>Self-Driving Cars are the Zeppelins of the 21st Century: Towards Writing the Next Chapter in the History of Failed Technologies</em> — whose main purpose is to open up a responsible debate about our future digital world.</p>
</div>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/taxonomy/term/17" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Self-Driving Cars</a></li></ul></section>Thu, 02 Feb 2017 09:42:50 +0000egdaylight149 at http://dijkstrascry.comhttp://dijkstrascry.com/node/149#commentsMichelle Obama vs. the Gatekeeper
http://dijkstrascry.com/node/145
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">15 October 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>Category mistakes and incomputability claims seem to go hand in hand. I have given one modern example in connection with <a href="/node/140">programming languages and Michael Hicks</a> and another one directly concerning <a href="/node/143">the diagonal argument</a>. In the present blog post I shall give one more example, based on an excerpt from my rejected POPL paper entitled: `Category Mistakes in Computer Science at Large.' Specifically, I shall zoom in on the “code improvement problem,” as discussed by Albert R. Meyer and Dennis M. Ritchie in their 1967 paper `The complexity of loop programs' [33].</p>
<p> </p>
<p><strong>Terminology</strong></p>
<p>I distinguish between two abstractions:</p>
<ul><li>abstraction A_{I}^{pos}, which allows for the representation of arbitrarily large positive integers, and</li>
<li>abstraction A_{R}, which allows for the representation of infinite-precision real numbers.</li>
</ul><p>I use the term “programming language” as an abbreviation for “computer programming language” and always write “mathematical programming language” in full.</p>
<p><strong><br /></strong></p>
<p><strong>Extract from my Rejected POPL Paper: “Flawed Incomputability Claims”</strong></p>
<p>Let us turn to what I take to be flawed incomputability claims in the literature. Meyer & Ritchie's 1967 paper, for instance, is not solely about FORTRAN programs and LOOP programs, but, more generally, about the industrial “code improvement problem.” Is it possible to automatically improve the assembly code of a corresponding FORTRAN program? — that's the central question in the introduction of their paper. Citing from their introduction:</p>
<blockquote><p>[C]onsider the problem of improving assembly code. Compilers for languages like FORTRAN and MAD typically check the code of an assembled program for obvious inefficiencies — say, two “clear and add” instructions in a row — and then produce edited programs which are shorter and faster than the original. [33, p.465, my emphasis]</p>
</blockquote>
<p>As this excerpt shows, the authors specifically referred to the [computer] programming languages FORTRAN and MAD [and not to mathematical programming languages]. So, clearly, a “program” here refers to a computer program, not to a mathematical object.</p>
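The kind of improvement the excerpt describes can be sketched as a peephole pass over a computer program's assembly code. The following is a minimal sketch in Python; the mnemonics ("CLA" from the quote, plus "ADD" and "STO") and the list-of-strings representation are illustrative assumptions of mine, not the actual FORTRAN or MAD toolchain:

```python
# A sketch of the peephole-style "code improvement" described in the
# excerpt.  "CLA x" (clear and add) overwrites the accumulator, so in
# "CLA a; CLA b" the first instruction is dead and can be removed.

def peephole(instructions):
    out = []
    for ins in instructions:
        if ins.startswith("CLA") and out and out[-1].startswith("CLA"):
            out.pop()             # drop the dead "clear and add"
        out.append(ins)
    return out

program = ["CLA a", "CLA b", "ADD c", "STO d"]
print(peephole(program))  # ['CLA b', 'ADD c', 'STO d']
```

The output is shorter and no slower than the input, which is precisely the sense of "improvement" at stake in their introduction.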
<p>To get a mathematical grip on the industrial “code improvement problem,” Meyer & Ritchie resorted to the theory of computability (and the theory of primitive recursive functions in the rest of their paper). Continuing with their introduction:</p>
<blockquote><p>From the theory of computability one can conclude quite properly that no code improving algorithm can work all the time. There is always a <strong>program</strong> which can be improved in ways that no particular improvement algorithm can detect, so no such algorithm can be perfect. [33, p.465, my emphasis]</p>
</blockquote>
<p>Here we see Meyer & Ritchie refer to the undecidability of the halting problem. The implication is that a “program” now refers to a mathematical object and, specifically, to a “Turing machine” (see the second footnote in their paper for a confirmation). In other words, a “program” here does not refer to, say, a finite state machine, and it definitely does not refer to a program residing in a computer.</p>
<p>Meyer & Ritchie subsequently conveyed some common wisdom from the theory of computation. It is possible to decide the halting problem for specific subsets of Turing machines(*); that is, even though the halting problem is undecidable in general, modeling computer programs as Turing machines can still lead to practical tools. (A modern example in this context is the work of Byron Cook et al. [9], which accepts both A_{I}^{pos} and A_{R}.) In their own words, regarding their code improvement problem:</p>
<blockquote><p>But the non-existence of a perfect algorithm is not much of an obstacle in the practical problem of finding an algorithm to improve large classes of common <strong>programs</strong>. [33, p.465, my emphasis]</p>
</blockquote>
<p>Here the word “program” thus still refers to a “Turing machine.”</p>
<p>My analysis so far shows that Meyer & Ritchie conflated their computer programs and their mathematical programs. Furthermore, Meyer & Ritchie only considered models of computation that comply with fundamental abstraction A_{I}^{pos}, namely that any variable V can represent an arbitrarily large positive integer. Specifically, they only considered a mathematical program as synonymous with either a Turing-machine program (i.e., a partially computable function) or a LOOP program (i.e., a primitive recursive function). But, as we have seen, there are also models of computation that do not comply with abstraction A_{I}^{pos}.</p>
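The LOOP programs mentioned above can be made concrete with a small interpreter; the encoding below is my own sketch, not Meyer & Ritchie's notation. Because a loop's bound is read once, at loop entry, every LOOP program terminates (by induction on program structure), which is why the halting problem is trivially decidable for this class. Note that the interpreter inherits Python's unbounded integers and so complies with abstraction A_{I}^{pos}:

```python
# Minimal interpreter for a LOOP-style language.  Programs are lists of
# instructions: ("zero", x), ("assign", x, y), ("inc", x), ("loop", x, body).

def run(prog, env):
    for ins in prog:
        if ins[0] == "zero":
            env[ins[1]] = 0
        elif ins[0] == "assign":
            env[ins[1]] = env.get(ins[2], 0)
        elif ins[0] == "inc":
            env[ins[1]] = env.get(ins[1], 0) + 1
        elif ins[0] == "loop":
            for _ in range(env.get(ins[1], 0)):   # bound fixed at entry
                run(ins[2], env)
    return env

# Addition, z := x + y, is primitive recursive and hence LOOP-computable:
add = [("assign", "z", "x"), ("loop", "y", [("inc", "z")])]
print(run(add, {"x": 3, "y": 4})["z"])  # 7
```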
<p>Based on their mathematical analysis, Meyer & Ritchie ended their paper with an impossibility claim about computing practice:</p>
<blockquote><p>If an “improvement” of the code of a <strong>program</strong> is defined as a reduction in depth of loops or number of instructions (without an increase in running time), then the proof of [the undecidability result expressed in] Theorem 6 also reveals that there can be no perfect code improvement algorithm. Thus the code improvement problem, which we noted in the introduction was undecidable for <strong>programs</strong> in general, is still undecidable for the more restricted class of Loop <strong>programs</strong>. [33, p.469, my emphasis]</p>
</blockquote>
<p>A purely mathematical — and, hence, valid — interpretation of Meyer & Ritchie's findings would amount to stating that the theoretical variant of the “code improvement problem” is not only undecidable for Turing-machine programs in general, but also for the more restricted class of LOOP programs. My disagreement has to do with the conflation made between the theoretical variant of the “code improvement problem” and the actual, industrial “code improvement problem.” It is the latter which is “noted in the introduction” of their paper in connection with FORTRAN and MAD, <em>not</em> the former.</p>
<p>As a result then — and now a specific technical contribution of my analysis follows — it is entirely possible that somebody <em>does</em> end up defining the improvement of the code of a FORTRAN program as a reduction in depth of loops or number of instructions without an increase in running time and yet still obtains a perfect code improvement algorithm for these computer programs. (The adjective “perfect” can, however, only be quantified in a well-defined mathematical setting.) Meyer & Ritchie's paper only indicates that the person who aspires to do so will have to resort to a model of computation that does not comply with abstraction A_{I}^{pos}.</p>
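To make this point tangible, consider a toy model of computation that violates A_{I}^{pos} because its single variable is a 2-bit accumulator. For such a finite model a provably perfect improver exists by exhaustive search. The machine below (three instructions, word size 4) is purely an illustrative assumption of mine, not anything from Meyer & Ritchie's paper:

```python
from itertools import product

# A toy machine with a 2-bit accumulator (values 0..3).
WORD = 4
INSTRS = [("clear",), ("inc",), ("double",)]

def step(acc, ins):
    if ins[0] == "clear":
        return 0
    if ins[0] == "inc":
        return (acc + 1) % WORD
    return (2 * acc) % WORD      # "double"

def run(prog, acc):
    for ins in prog:
        acc = step(acc, ins)
    return acc

def semantics(prog):
    # In this finite model a program's meaning is its full input-output
    # table, so semantic equivalence is decidable by inspection.
    return tuple(run(prog, a) for a in range(WORD))

def improve(prog):
    # A *perfect* improver for this model: exhaustive search for the
    # shortest program with identical semantics.
    target = semantics(prog)
    for n in range(len(prog) + 1):
        for cand in product(INSTRS, repeat=n):
            if semantics(cand) == target:
                return list(cand)

p = [("inc",), ("clear",), ("inc",), ("inc",)]   # always ends with acc == 2
print(improve(p))  # [('clear',), ('inc',), ('inc',)]
```

Because the state space is finite, checking all four inputs decides equivalence, and the search is guaranteed to return a shortest equivalent program; the usual undecidability obstacle simply does not arise in this model.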
<p><strong>Footnote</strong> (*) I thank a reviewer for making the following observation: the Timed Automata model is another example that does not rely on finite abstractions and yet is complete and decidable.</p>
<p> </p>
<p><strong>Michelle Obama Comes to the Rescue</strong></p>
<p>That, then, was another excerpt from my rejected POPL paper. Perhaps the POPL Gatekeeper agrees with my analysis — because s/he says it “is only a recapitulation of dialogues familiar to everyone working in verification and related areas” — and perhaps s/he therefore considers my specific technical contribution to be trivial. I believe my analysis is very trivial indeed, but only in retrospect. Meyer and Ritchie's main claim is similar to that of <a href="/CategoryMistakes">Dr. X</a> who, briefly put, says that he has mathematically <em>proved</em> that certain <em>systems</em> cannot be <em>engineered</em>. And that incorrect claim, in turn, shows some resemblance to <a href="/node/140">Michael Hicks's views</a>.</p>
<p>I honestly don't know whether the POPL Gatekeeper is <em>really</em> convinced that Meyer and Ritchie have incorrectly projected their theoretical findings onto programming practice. Here's what the POPL Gatekeeper has to say in his/her own words about my rejected paper:</p>
<blockquote><p>I believe the main points of this paper are already widely agreed-upon in the POPL community and are used appropriately throughout the literature, even in the (relatively recent) papers that are quoted as evidence to the contrary. What I expect is that the author of this paper is an outsider to the POPL community and has misunderstood its conventions for writing about formalism.</p>
</blockquote>
<p>The POPL Gatekeeper has made a <a href="/node/142">similar remark about Dave Parnas</a> who had the audacity to scrutinize the writings of some honorable POPL members in the Communications of the ACM.</p>
<blockquote><p>[Continued:] The conclusions may be interesting as a data point on how outsiders may misinterpret formal-methods papers, though it's not clear the situation here is any worse than it is for the average technical field.</p>
</blockquote>
<p>Four quick points:</p>
<ol><li>Is the Gatekeeper suggesting that I first research whether “the situation” is worse in computer science than in other disciplines and that I then come back to the POPL gate if (and only if) the answer is affirmative? (My educated guess is that researchers in more mature disciplines make fewer category mistakes.) At least the Gatekeeper is implicitly acknowledging that “the situation” can be improved. That's a start.</li>
<li>A researcher who does consistently distinguish between the model and the technical artefact is always preferable to someone who conflates the two. Check out the <a href="/node/142">Parnas-Chaudhuri exchange</a> and judge for yourself. </li>
<li>I'm afraid I'm not the kind of outsider the Gatekeeper was expecting. I talk to formal methodists every single day. I even used to be a symbol chauvinist but at some point in my career I met people who were asking the right questions.</li>
<li>I'm sure many of my findings are obvious to the Gatekeeper but I'm also certain that s/he has not grasped all implications (<a href="/node/144">see this</a>). </li>
</ol><p>Continuing with the Gatekeeper's words:</p>
<blockquote><p>I think this paper should not be accepted, because it is only a recapitulation of dialogues familiar to everyone working in verification and related areas, and yet at the same time the paper fails to "do no harm," taking quotes out of context to accuse researchers of confusion.</p>
</blockquote>
<p>As Michelle Obama has said recently, “when they go low, we go high.”</p>
<blockquote><p>[Continued:] As a convincing paper should, this one presents evidence for its thesis that the credo above is *not* already in wide use, explaining how research papers are giving misleading impressions about what their results prove. The evidence is entirely in the form of historical anecdotes and quotes from a handful of papers.</p>
</blockquote>
<p>Historical anecdotes? A handful of papers? Is the Gatekeeper suggesting that I first find many more papers (which really is no difficulty at all) and that I then come back to knock on the POPL gate? What about all the <em>other</em> examples presented in the books of <a href="https://www.amazon.com/Philosophy-Computer-Science-Bureaucarcies-Administration/dp/156324991X">Timothy Colburn</a> and <a href="https://mitpress.mit.edu/books/mechanizing-proof">Donald MacKenzie</a> — books cited in my paper? Moreover, almost all articles discussed in my rejected POPL paper are authored by prominent computer scientists.</p>
<blockquote><p>[Continued:] There are conventions for how such papers are written and what is assumed about their contexts. The author of this paper seems to misunderstand the conventions, leading to sightings of fundamental confusions where none exist.</p>
</blockquote>
<p>Fortunately I belong to a second — or even a third — generation of software scholars who don't buy this rhetoric. I stand on the shoulders of James Fetzer, Peter Naur, Timothy Colburn and others who have already made complaints similar to those that I am making here. My small contribution here is that I am also
<ol><li>discussing technical claims pertaining to computability theory,</li>
<li>following <a href="/node/139">Raymond Turner's</a> very recent publications about <em>technical artefacts</em>, and </li>
<li>using primarily POPL case studies in an attempt to reach out to the POPL community. </li>
</ol><blockquote><p>[Continued:] At most, these new observations call for considering *educational* strategies for informing a broader audience about these conventions; researchers in the field are already well aware of them and draw the correct conclusions from papers.</p>
</blockquote>
<p>So now the POPL Gatekeeper is at least admitting <em>some</em>thing. My clarifications are potentially beneficial to educators and outsiders; i.e., researchers who do not belong to the elite. I'm afraid the last sentence in the previous quote is false and, as I've explained before, I don't have any reason to believe that the POPL Gatekeeper actually understands the implications of <em>consistently</em> distinguishing between computer programs and mathematical programs. S/he has, remember, <a href="/node/142">sidestepped my observation</a> that multiple POPL members still tell me that it *is* possible to fully verify a digital *system*.</p>
<p> </p>
<p><strong>References</strong></p>
<p>[9] B. Cook, A. Podelski, and A. Rybalchenko. Proving program termination. Communications of the ACM, 54(5):88–98, May 2011.<br />[33] A. Meyer and D. Ritchie. The complexity of loop programs. In Proceedings of the ACM National Meeting, pages 465–469, 1967.</p>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/category-mistakes" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Category Mistakes</a></li></ul></section>Sat, 15 Oct 2016 17:26:40 +0000egdaylight145 at http://dijkstrascry.comhttp://dijkstrascry.com/node/145#commentsMarvin Minsky and the Gatekeeper
http://dijkstrascry.com/node/144
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">11 October 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>A recurring theme in my blog posts is the appropriation of the term “computer program” by Dijkstra and other researchers. The “computer program” concept has acquired at least four different meanings during the course of history. It can refer to:</p>
<ol><li>a technological object à la Maurice Wilkes in 1950 and Dave Parnas in 2012,</li>
<li>a mathematical object of finite capacity à la Edsger Dijkstra in 1973,</li>
<li>a mathematical (Turing-machine) object of infinite size à la Christopher Strachey in 1973, and </li>
<li>a model of the real world that is not a logico-mathematical construction à la Peter Naur in 1985 and Michael Jackson today. </li>
</ol><p>With regard to Dijkstra and Strachey, see my <a href="http://www.dijkstrascry.com/infinite">blog post from March 2013</a>. For Naur's views, see <a href="http://www.lonelyscholar.com/node/7">my oral history</a> with him and, likewise, see <a href="http://www.dijkstrascry.com/MichaelJackson">my interview with Jackson</a>. (Finally, there is of course <a href="/node/139">Raymond Turner's philosophical perspective</a>: to view a “computer program” as a technical artefact, which is not quite the same as Viewpoint 1.)</p>
<p>What I find so vexing is that computer scientists did not consistently follow one interpretation in their writings. Marvin Minsky, for example, used the word “program” on page 25 in his 1967 book, <em>Computation: Finite and Infinite Machines</em> [8], to refer to data and instructions that fit in a real, finite memory of a physical computer. On page 153, by contrast, the very same word refers to a mathematical object of “unlimited storage capacity,” akin to a Turing machine. Likewise, Tony Hoare consistently used the word “computer” in 1969 to refer to a real physical device but in his 1972 paper the very same word sometimes refers to a finite, physical device and sometimes to a mathematical abstraction in which “infinite computations” can arise [4].</p>
<p>In sum, historical actors did not always explicitly distinguish between real objects and their models, let alone between all four aforementioned meanings of a “computer program.” In the words of an eminent scholar, “epistemic pluralism is a major feature of emerging fields” [1] and I have no reason to believe that computer science is any different.</p>
<p>In the next section I present an extract from my rejected POPL paper, pertaining to Minsky's reception of the halting problem. Again, it is the categorical distinction between a mathematical program and a computer program that I wish to emphasize, not to mention the fact that a computer program can be modeled with <em>multiple</em> mathematical programs. Minsky places a “computation system” and a “Turing machine” in the same category. That's fine as long as a computation system denotes a mathematical object. But later on, it becomes clear that he is talking about electronic computers.</p>
<p> </p>
<p><strong>Extract from my Rejected POPL Paper: “Marvin Minsky, 1967”</strong></p>
<p>That all being said and done, the anonymous referees [from the CACM] whom I have cited at length [in a previous part of my rejected POPL paper] are in good company for they have history on their side. In 1967 Marvin Minsky reasoned similarly in his influential book <em>Computation: Finite and Infinite Machines</em>, as the following excerpt illustrates:</p>
<blockquote><p>The unsolvability of the halting problem can be similarly demonstrated for any computation system (rather than just Turing machines) which can suitably manipulate data and interpret them as instructions.<br />In particular, it is impossible to devise a uniform procedure, or computer <strong>program</strong>, which can look at any computer <strong>program</strong> and decide whether or not that <strong>program</strong> will ever terminate. [8, my emphasis]</p>
</blockquote>
<p>How do we interpret Minsky's wordings? Does a “program” solely refer to a “Turing machine” or any other Turing-equivalent mathematical program for that matter? If so, then we could perceive the statement as [completely] abstract and correct; for then it would be a rephrasing of Martin Davis's 1958 account [3, p.70] of Alan Turing's 1936 paper. Or, as a second interpretation, does a “program” refer to a computer program that we can indeed “devise” and that may or may not “terminate”?</p>
<p>Based on a detailed study of Minsky's book, I assert that the right answer leans towards the second interpretation, as also the follow-up excerpt from his book indicates:</p>
<blockquote><p>(This observation holds only for <strong>programs</strong> in computers with essentially unlimited secondary storage, since otherwise the computer is a finite-state machine and then the halting problem is in fact solvable, at least in principle.) [8, p.153, my emphasis]</p>
</blockquote>
<p>Paraphrasing Minsky, if we view computer programs as Turing machines, then we have an impossibility result with severe practical implications. However, Minsky's reasoning is only valid if the following two assumptions hold (each of which is wrong):</p>
<ol><li> A computer program is a synonym for a mathematical program.</li>
<li>The mathematical program (mentioned in the previous sentence) has to be equivalent to a Turing machine program and not to, say, a primitive recursive function.</li>
</ol><p>In other words, Minsky did distinguish between finite and infinite objects, but not between abstract objects (Turing machines) and concrete physical objects (computers and storage), let alone between abstract objects and technical artefacts (computer programs).</p>
<p>The second assumption often goes unmentioned in the literature exactly because computer programs and mathematical programs are frequently conflated. Contrary to what Minsky wrote, <a href="/node/143">a computer with a finite memory is <em>not</em> a finite state machine</a>, it can only be modeled as such.</p>
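Minsky's parenthetical remark, that halting is solvable "at least in principle" once the machine is finite-state, rests on a simple argument that can be sketched in a few lines. The deterministic toy machine below is my own illustration, not Minsky's:

```python
# For a deterministic machine with finitely many configurations, halting
# is decidable: simulate while recording every configuration seen; a
# repeated configuration means the machine loops forever.

def halts(step, state, is_halted):
    seen = set()
    while not is_halted(state):
        if state in seen:
            return False          # configuration repeated: runs forever
        seen.add(state)
        state = step(state)
    return True

# Example: acc := (acc + 2) mod 8; halt as soon as acc == 5.
step = lambda acc: (acc + 2) % 8
print(halts(step, 0, lambda acc: acc == 5))  # False (0, 2, 4, 6 cycle)
print(halts(step, 1, lambda acc: acc == 5))  # True  (1, 3, 5)
```

For a real computer the set of configurations is astronomically large, which is why Minsky hedges with "at least in principle": the decision procedure exists mathematically but is not a practical tool.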
<p>While I have criticized the first assumption philosophically, the second assumption can easily be scrutinized by having a good look at the history of computer science. Specifically, I will complement the findings of the historians Michael Mahoney [7, p.133] and Edgar Daylight [5, Ch.3] and support the following thesis: computer scientists mathematically modeled actual computations in diverse ways, and many of them did not resort to the Turing-machine model of computation (or to any other Turing-equivalent model). In a word, there never has been a standard model of computation throughout the history of computer science; it is the quest for such a model and not the model itself that brought many computer scientists together.</p>
<p> </p>
<p><strong>Revisiting the Gatekeeper</strong></p>
<p>That, then, was an excerpt from my rejected POPL paper. The extract conveys Minsky's 1967 stance with regard to the unsolvability of the halting problem. The question of interest here is whether Minsky intentionally fused the two categories of mathematical programs and computer programs. Perhaps he was merely crafting his sentences with brevity in mind and I am making too much of all this.</p>
<p>My scrutiny of the writings of <a href="/node/139">John Reynolds</a>, <a href="/node/140">Michael Hicks</a>, <a href="/node/141">Andrew Appel</a>, <a href="/node/142">Dave Parnas, Swarat Chaudhuri</a>, <a href="/node/143">CACM reviewers</a>, and Marvin Minsky, at least hints at the possibility that epistemic pluralism is a major feature of computer science too. It is precisely Raymond Turner and like-minded scholars who are starting to lay the philosophical foundations of our emerging field. The POPL Gatekeeper, however, has expressed a very different opinion:</p>
<blockquote><p>Most criticisms of particular authors and their published claims seem dubious. I think these authors really do understand all the relevant issues and have just crafted certain sentences with brevity in mind. It doesn't work in practice to footnote every sentence with a review of all of its philosophical foundations!</p>
</blockquote>
<p>It works to be precise at least once. Conflations abound. Or, to borrow from Timothy Colburn's analysis, here's what Tony Hoare wrote in 1986:</p>
<blockquote><p>Computer programs are mathematical expressions. They describe, with unprecedented precision and in the most minute detail, the behavior, intended or unintended, of the computer on which they are executed.</p>
</blockquote>
<p>This quote, which I have copied from page 132 in Colburn's book <a href="https://www.amazon.com/Philosophy-Computer-Science-Bureaucarcies-Administration/dp/156324991X"><em>Philosophy and Computer Science</em></a>, is reminiscent of <a href="/node/142">Chaudhuri's words, which Parnas</a> and subsequently I, too, have scrutinized. Or, to use Peter Naur's 1989 words, as cited by Colburn on page 147 of the same book:</p>
<blockquote><p>It is curious to observe how the authors in this field, who in the formal aspects of their work require painstaking demonstration and proof, in the informal aspects are satisfied with subjective claims that have not the slightest support, neither in argument nor in verifiable evidence. Surely common sense will indicate that such a manner is scientifically unacceptable.</p>
</blockquote>
<p>One such subjective claim is, once again, that “full formal verification” is possible; that is,</p>
<blockquote><p>“full formal verification, the end result of which is a proof that the code will always behave as it should. Such an approach is being increasingly viewed as viable.” — citing <a href="/node/140">Michael Hicks</a></p>
</blockquote>
<p>If the POPL Gatekeeper were to “really understand all the relevant issues” raised in my rejected POPL paper him- or herself, then s/he would either agree with me or with Michael Hicks, but not with both. The Gatekeeper can't have it both ways.</p>
<p> </p>
<p><strong>Pluralism is a Virtue</strong></p>
<p>Researching the history and philosophy of our young field already has practical value today. Grasping the <em>multitude</em> of definitions of a “computer program” leads to conceptual clarity and, specifically, to an increased understanding of seemingly conflicting views on computer science. For example, the <a href="/node/142">Parnas-Chaudhuri exchange</a> is essentially a clash between definitions 1. and 3., presented at the beginning of this blog post. Each actor attaches a different meaning to the term “computer program.”</p>
<p>Appreciating the plurality of “computer program” definitions can lead to better computer engineering practices. For instance, Byron Cook’s article ‘Proving Program Termination’ [2] and Daniel Kroening and Ofer Strichman’s book, <em>Decision Procedures: An Algorithmic Point of View</em> [6], together present two complementary views on what a “computer program” entails in the present century. For Cook, the variables in the following program text range over integers of infinite precision; he thereby follows Strachey’s 1973 view of programming, which bears some resemblance to Minsky’s account on page 153 of his 1967 book.</p>
<p>x := input();<br />y := input();<br />while x > 0 and y > 0 do<br />    if input() = 1 then x := x - 1;<br />    else y := y - 1; fi<br />done</p>
<p>Kroening and Strichman, by contrast, present an alternative perspective in which all program variables (in the above program text) are defined over finite-width integers; this aligns more with the view held by Dijkstra (1973) and with Minsky’s account on page 25. The implication is that the software tools built by Cook differ in fundamental ways from those developed by Kroening and Strichman. (A decent analysis of Peter Naur’s writings will reveal yet another view on programming.) Good engineers today benefit from this pluralism by using the tools provided by <em>both</em> camps. They will not object if they are requested to use different semantic models for the same C computer program — a point that <a href="/node/139">I have tried to stress before</a>.</p>
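<p>To make the two semantic models concrete, consider the following sketch (my own, taken from neither source; the 8-bit word width, the example loop, and the step budget are assumptions chosen purely for illustration). The same program text, “while x > 0 do x := x + 1,” diverges when its variable ranges over mathematical integers, yet terminates when the variable wraps around at a finite width:</p>

```python
# Contrast two semantics for the same program text:
#   while x > 0 do x := x + 1
# Cook-style: x ranges over mathematical integers (infinite precision).
# Kroening/Strichman-style: x is a fixed-width machine integer.

WIDTH = 8  # hypothetical word width, chosen for illustration

def wrap(v, width=WIDTH):
    """Reinterpret v as a two's-complement integer of the given width."""
    v &= (1 << width) - 1
    return v - (1 << width) if v >= (1 << (width - 1)) else v

def diverges_over_integers(x, fuel=1000):
    """With infinite precision, x := x + 1 keeps x positive forever."""
    for _ in range(fuel):
        if x <= 0:
            return False  # the loop would have exited
        x = x + 1
    return True  # still running after the whole fuel budget

def terminates_fixed_width(x, fuel=1000):
    """With 8-bit wrap-around, incrementing past 127 yields -128, so the loop exits."""
    for _ in range(fuel):
        if x <= 0:
            return True
        x = wrap(x + 1)
    return False

print(diverges_over_integers(1))  # True
print(terminates_fixed_width(1))  # True
```

<p>Neither verdict is “the” right one; each is correct relative to its own model, which is precisely the pluralism at stake.</p>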
<p> </p>
<p><strong>References</strong></p>
<p>[1] B. Bensaude Vincent, <em>Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences</em>, 44:2, 122–129, 2013.<br />[2] B. Cook, A. Podelski, and A. Rybalchenko. ‘Proving Program Termination,’ <em>CACM</em>, 54:5, 88–98, 2011.<br />[3] M. Davis. <em>Computability and Unsolvability</em>. McGraw-Hill, New York, 1958.<br />[4] C.A.R. Hoare and D.C.S. Allison. ‘Incomputability,’ <em>ACM Computing Surveys</em>, 4:3, 169–178, 1972.<br />[5] D. Knuth and E. Daylight. <a href="http://www.lonelyscholar.com/knuth2"><em>Algorithmic Barriers Falling: P=NP?</em></a> Lonely Scholar, Geel, 2014.<br />[6] D. Kroening and O. Strichman. <em>Decision Procedures: An Algorithmic Point of View</em>. Springer, 2008.<br />[7] M. Mahoney. <em>Histories of Computing</em>. Harvard University Press, Cambridge, Massachusetts/London, England, 2011.<br />[8] M. Minsky. <em>Computation: Finite and Infinite Machines</em>. Prentice-Hall, 1967.</p>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/category-mistakes" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Category Mistakes</a></li></ul></section>Tue, 11 Oct 2016 14:58:11 +0000 egdaylight 144 at http://dijkstrascry.com http://dijkstrascry.com/node/144#comments
The POPL Gatekeeper
http://dijkstrascry.com/node/143
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">6 October 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>There is an obvious distinction to be made between computers (which include laptops and iPads) on the one hand and mathematical models on the other hand. Strictly speaking, it is wrong to say that “a computer <em><span style="text-decoration: underline;">is</span></em> a finite state machine.” That's like <em>speaking about a mathematical model as if it coincides with reality</em>. Unfortunately, it is unusual to make these kinds of observations explicit in computer science, as I am doing now and as I have done in my paper entitled `Category Mistakes in Computer Science at Large.'</p>
<p>My paper on Category Mistakes was rejected by POPL referees for some very good reasons (albeit non-technical ones). I've <a href="/node/142">introduced the POPL Gatekeeper in my previous post</a> and the objective here is to discuss the following remark made by the Gatekeeper:</p>
<blockquote><p>I'm not sure why CACM reviewers would ignore the difference between real-world systems and their mathematical models. I don't actually see that mistake in the quoted reviews.</p>
</blockquote>
<p>I received this genuine comment on October 3rd, 2016. In my rejected POPL paper, I present quoted reviews from anonymous referees of the Communications of the ACM (CACM) — reviews that I received on January 24th, 2016, after having submitted a prior version of my rejected POPL paper to the CACM in the fall of 2015. (For the sociological record: the reviews received from the CACM are much more elaborate; they convey more of <em>how</em> each reviewer thinks. The CACM referees have provided pages of feedback. In retrospect, this is perhaps to be expected. The CACM is, after all, a journal, while “POPL” refers to an annual conference with a well-defined scope of research.)</p>
<p>As I've mentioned in my previous blog post, I don't agree with the Gatekeeper's assessment. In my words:</p>
<blockquote><p>I'm afraid reviewers of the CACM <em>have</em> applied faulty reasoning and I also believe I have illustrated <em>precisely that</em> in my POPL paper (but that's a topic I will address another day).</p>
</blockquote>
<p>I will address that topic now. I will, however, only discuss a <em>strict subset</em> of all the CACM reviews that I cover in my rejected POPL paper. I will thus only present the basics here and leave the really interesting stuff for later.</p>
<p><strong>Computers vs. Mathematical Models</strong></p>
<p>The categorical distinction between computers and mathematical models often goes unnoticed. A computer <em>is not</em> a finite state machine. The former is a <em>concrete physical</em> object and it can be <em>mathematically modeled</em> by the latter, which is an <em>abstract</em> object. Well, here then is the first comment that I have received from a referee of the CACM:</p>
<blockquote><p>My laptop is a universal Turing machine, but its tape size is of course limited by the finiteness of human resources.</p>
</blockquote>
<p>If you limit the tape size of a universal Turing machine, you may end up with, say, a linear bounded automaton or even a machine that is computationally equivalent to a finite state machine. You thus end up with another <em>mathematical</em> model of computation but <em>not</em> with a laptop (i.e., a concrete physical object). To be more precise:</p>
<blockquote><p>You <span style="text-decoration: underline;">can't</span> use <em>human</em> resources to limit the size of a <em>mathematical</em> object, i.e., the tape. Note that the “tape” indeed denotes a mathematical object and <em>not</em> a physical object, contrary to what the word “tape” seems to suggest.</p>
</blockquote>
<p>You can introduce mathematical restrictions to limit the size of a mathematical object. Likewise, you can use human resources to limit the size of a concrete physical object (such as a laptop). But, once again:</p>
<blockquote><p>A Turing machine <em>is</em> a mathematical object, it is <em>not</em> a computer. This is contrary to what the word “machine” seems to suggest.</p>
</blockquote>
<p>I know where the CACM reviewer is coming from. I, too, have been educated as a computer scientist and also used to speak about my mathematical model as if it coincided with reality. The right way to put it, once again, is as follows:</p>
<blockquote><p>Placing finite bounds on an abstract object (Turing machine) does not make it a concrete physical object (laptop). Instead, it results in another abstract object (e.g., a linear bounded automaton or a finite state machine) that can potentially serve as another mathematical model for the physical object at hand.</p>
</blockquote>
<p>I agree that these words convey a very trivial distinction. But missing the distinction can easily lead to faulty reasoning. For example, it makes no sense to say that a laptop is Turing complete. Only a mathematical model of computation can be Turing complete. Likewise, it makes no sense to question whether your iPhone is Turing complete or not. Unfortunately, these statements can be found all over the place, not only in peer reviews but also in articles and in books, published by reputable publishers. (I discuss several peer reviewed articles in my rejected POPL paper.) I've even had discussions with colleagues who start proving on the blackboard that my laptop is Turing complete. They really think they are giving a <em>mathematical</em> proof. As I emphasize in my (often rejected) writings:</p>
<blockquote><p>It is the mathematical model of a laptop that may or may not be Turing complete, not the laptop itself. Yet, many computer scientists disagree with this statement and erroneously place both objects in the same category. This is where a category mistake occurs.</p>
</blockquote>
<p>Comparing a laptop with a Turing machine is only warranted with the proviso that we all agree we are reasoning across <em>separate</em> categories.</p>
<p>Likewise, and as I have already illustrated to some extent in my previous blog posts, it makes no sense to claim, nor to attempt to mathematically prove, any of the following:</p>
<ol><li>The computer programming language C is Turing complete.</li>
<li>The Halting Problem of computer programs residing in my laptop (or any laptop for that matter) is unsolvable.</li>
</ol><p>I understand that many programming language experts view a “programming language” as a mathematical object. That's why I want to explicitly distinguish between a “computer programming language” and a “mathematical programming language.” I take C to be a computer programming language, and I know that many computer scientists (typically not programming language experts) who defend claim 1. take C that way too and, therefore, make a category mistake. In fact, they often simply do not distinguish between the computer programming language and their mathematical model of it. That is the main point I am repeatedly trying to make.</p>
<p><strong>Abusing the Halting Problem</strong></p>
<p>Grasping the significance of the observations made so far is not easy for here is yet another response that I have received from a referee of the CACM and which I have also reported in my rejected POPL paper:</p>
<blockquote><p>What does the undecidability proof of the halting problem for computer programs actually tell us. Like diagonalization proofs in general it may be viewed finitely as saying that, if there is a bound M on the size of accessible computer memory, or on the size of computer programs, or any other resource, then no computer program subject to the same resource bounds can solve the problem for all such computer programs.</p>
</blockquote>
<p>The previous remark and the follow-up remark, presented below, are only correct if we accept the following two assumptions (each of which is wrong):</p>
<ol><li>A computer program is a synonym for a mathematical program. </li>
<li>The mathematical program (mentioned in the previous sentence) has to be equivalent to a Turing machine program and not to, say, a primitive recursive function. </li>
</ol><p>The reason why the second assumption has to hold is merely because the referee is referring to the halting problem of Turing machines. Continuing:</p>
<blockquote><p>If computer program A solves correctly all halting problems for computer programs respecting bound M, then the counterexample computer program T must exceed that bound, which is why A fails for T. To solve problems of computer programs, one needs an ideal program.</p>
</blockquote>
<p>The quote hints at a distinction that has to be made between finite and infinite objects (with the latter being labeled “ideal”) but the categorical distinction between computer programs and mathematical programs goes unnoticed. Again, this is where a category mistake occurs. The undecidability proof of the halting problem is about <em>mathematical</em> programs only, and not about <em>computer</em> programs. The diagonal argument can only be applied to mathematical objects. So, to be frank, the referee thinks s/he is giving a mathematical argument but, in fact, s/he is demonstrating faulty reasoning. S/he is <em>not</em> proving something about <em>computer</em> programs but about a <em>particular</em> mathematical model of a computer program! So much for mathematical rigor.</p>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/category-mistakes" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Category Mistakes</a></li></ul></section>Thu, 06 Oct 2016 09:51:50 +0000 egdaylight 143 at http://dijkstrascry.com http://dijkstrascry.com/node/143#comments
Parnas and Chaudhuri
http://dijkstrascry.com/node/142
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">4 October 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>The topic of `Category Mistakes,' which I have discussed in my four previous blog posts, is not limited to digital systems. Engineers in all areas use models and they, too, can potentially fall into the trap of mistaking the model for reality.</p>
<p>Dave Parnas told me in August 2016 that one of his favorite examples in this regard is `Ohm's Law,' which states that V = I*R. In reality, the ratio between current (I) and voltage (V) varies, i.e., the relationship is not actually linear. Engineers can use this law to analyze a circuit and verify that it has the desired behavior only to find that the circuit fails in certain conditions. Parnas gives the following example:</p>
<blockquote><p>The circuit may overheat and the resistance (R) will change. Ohm’s Law can be made an accurate description of the device by adding conditions, e.g. V-I*R < Delta if a < I < b etc. The result will be much more complex but it will give conditions under which the model’s conditions are accurate. Even this is a lie as it makes assumptions about ambient temperature and the aging of the components that are not stated.</p>
</blockquote>
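<p>Parnas's conditioned version of Ohm's Law can be sketched numerically. (The sketch below is mine, not Parnas's; the nominal resistance, the temperature coefficient, and the linear drift model are all hypothetical numbers chosen only to illustrate the point.)</p>

```python
# Idealized model: V = I * R_nominal.
# Reality (still simplified): resistance drifts as the part heats up.

R_NOMINAL = 100.0  # ohms, assumed nominal value
ALPHA = 0.004      # assumed temperature coefficient (per degree C)

def actual_resistance(temp_rise_c):
    """A slightly less wrong model: resistance grows with self-heating."""
    return R_NOMINAL * (1 + ALPHA * temp_rise_c)

def model_error(i, temp_rise_c):
    """How far the idealized prediction I * R_nominal is from the drifted one."""
    return abs(i * actual_resistance(temp_rise_c) - i * R_NOMINAL)

# Within a cool operating range the idealized law is accurate ...
print(model_error(i=0.1, temp_rise_c=0))   # 0.0 volts off
# ... but after 50 degrees of self-heating the prediction is 2 volts off.
print(model_error(i=0.1, temp_rise_c=50))  # 2.0
```

<p>Even this refined model, as Parnas observes, quietly assumes things (ambient temperature, component aging) that are not stated.</p>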
<p>Parnas's main point is that engineers must be aware of the limitations of models as descriptions. More on Parnas's views can be found in <a href="http://www.lonelyscholar.com/node/17">the first panel discussion</a> held at ETH Zurich in November 2010. Parnas concludes by noting that this awareness is completely missing in the UML (Unified Modeling Language) and MDE (Model-Driven Engineering) literature. Can the same be said about the programming language literature?</p>
<p>Category Mistakes are also commonly found in physics. A colleague in computer science recently explained to me that several publications contain statements of the following kind:</p>
<blockquote><p>Goedel proved that time travel is possible.</p>
</blockquote>
<p>What Goedel did prove is that time travel is in principle possible according to his mathematical model (i.e., with his chosen equations). Of course, one could argue that physicists frequently choose to craft their sentences with brevity in mind and that they <em>are</em> well aware of the fact that they are modeling in the first place. This is precisely the kind of response I received from a POPL referee with regard to my rejected paper, entitled: `Category Mistakes in Computer Science at Large.' When I give this kind of response to my colleague in connection with the previous statement about Goedel, he insists that some physicists really do believe that Goedel proved that time travel is possible. As I have argued in my previous blog posts, I believe the same can be said about computer scientists.</p>
<p>Similar remarks can be made about several other disciplines, including linguistics and mathematics. Concerning the latter, here is what <a href="http://pauli.uni-muenster.de/~munsteg/arnold.html">V.I. Arnold said in 1997</a>:</p>
<blockquote><p>The mathematical technique of modeling consists of [...] <em>speaking about your deductive model in such a way as if it coincided with reality</em>. The fact that this path, which is obviously incorrect from the point of view of natural science, often leads to useful results in physics is called “the inconceivable effectiveness of mathematics in natural sciences” (or “the Wigner principle”). [my emphasis]</p>
</blockquote>
<p>Likewise, researchers in computer science actually believe the following statement:</p>
<blockquote><p>“full formal verification, the end result of which is a proof that the code will always behave as it should. Such an approach is being increasingly viewed as viable.” — <a href="/node/140">citing Michael Hicks</a></p>
</blockquote>
<p>So Hicks says we can have a mathematical proof about the behavior of an engineered system. Not so. The best we can do is prove mathematical properties about <em>some</em> mathematical model of <em>the</em> running system. And doing so can, indeed, have great practical value. Moreover, I advocate — and I also believe this is happening (and always has been happening) in engineering — using <em>multiple</em> (and complementary) semantic models for the <em>same</em> digital technology; that is, for the same C computer program. Doing that gives us engineers some more confidence that the code behaves “as it should.”</p>
<p> </p>
<p><strong>1. The POPL Gatekeeper</strong></p>
<p>It did not come as a surprise to me when I received an email on October 3rd, 2016, explaining that my POPL paper `Category Mistakes in Computer Science at Large' was officially rejected. Nor do I in any way wish to ridicule the peer review process, which I consider to be professional and honest. I firmly believe the reviewer did a very good job in:</p>
<ol><li>Justifiably preventing my unorthodox analysis from entering the POPL'17 proceedings. </li>
<li>Protecting <em>me</em> from making a fool of myself in an auditorium full of programming language experts.</li>
</ol><p>That said, I still think that the POPL referee did not fully comprehend my analysis (which, again, does not mean that s/he should have accepted my paper for publication). It should also be remarked that there was a second reviewer who rejected my paper too. His or her comments were very brief and I shall therefore postpone discussing them. I will disseminate my rejected POPL paper along with the two reviews in the near future. Why not? Seeking a dialogue between different groups of scholars can lead to <em>conceptual clarity</em>. Here's a time line of events:</p>
<ol><li>In the first week of July 2016, I submitted my POPL paper (to be disseminated later).</li>
<li>On September 14th, 2016, I received two peer reviews (to be disseminated later).</li>
<li>On the same day I wrote and submitted a short rebuttal (presented below).</li>
<li>On October 3rd, 2016, I received the official rejection of my POPL paper, along with a reply (presented below) from the first reviewer pertaining to my rebuttal. </li>
</ol><p> </p>
<p><strong>My Rebuttal (September 14th, 2016)</strong></p>
<blockquote><p>The reviews are very good and informative.</p>
<p>I am willing to believe the first reviewer when he says that my findings are already well understood in the POPL community. It should be remarked though that the CACM community, by contrast, fundamentally disagrees with my "category mistakes" and thus also with the reviewer's views. What to do about this?</p>
<p>Moreover, multiple POPL members still tell me that it *is* possible to fully verify a digital *system*, even after having studied my work in detail. So the reviewer's claim about the POPL community does not mix well with my daily experience. Please tell me what I can do about this? Should I abandon this research topic?</p>
<p>I do take gentle issue with the way the first reviewer frames my works: I am not presenting "anecdotes" nor am I just citing a few papers. Clearly my reference to Parnas and Chaudhuri backs up *my* story, not the reviewer's.<strong><br /></strong></p>
</blockquote>
<p> </p>
<p><strong>Reply by Reviewer #1 (October 3rd, 2016)</strong></p>
<blockquote><p>I'm not sure why CACM reviewers would ignore the difference between real-world systems and their mathematical models. I don't actually see that mistake in the quoted reviews. The standard convention is that a reference to a programming language refers to the abstract mathematical object, which is unproblematic, since today it is routine to define full-fledged languages unambiguously.</p>
<p>I don't think the Parnas-Chaudhuri exchange is problematic, because Parnas is not a member of the POPL community and likely ran into similar problems interpreting papers from it.</p>
</blockquote>
<p> </p>
<p><strong>My analysis (October 4th, 2016)</strong></p>
<p>Four quick points:</p>
<p>1. The reviewer is quite right in stressing that “today it is routine to define full-fledged languages unambiguously” and perhaps I misportrayed (albeit in just one or two sentences) the accomplishments of the POPL community in my rejected paper.</p>
<p>2. I'm afraid reviewers of the CACM <em>have</em> applied faulty reasoning and I also believe I have illustrated <em>precisely that</em> in my POPL paper (but that's a topic I will address another day).</p>
<p>3. Reference to a programming language as an abstract mathematical object is only “unproblematic” if we keep in mind that it is <span style="text-decoration: underline;"><em>a</em></span> mathematical model of <em>the</em> technology under scrutiny. That's why I explicitly distinguish between a mathematical programming language and a computer programming language. They are categorically distinct. Furthermore, assuming by convention that there is a 1-to-1 relation between a computer program and a mathematical program is fine. But using that premise to project results from, say, computability theory onto engineering is often wrong and misleading. (See my previous four posts for more information, starting with the first one: <a href="/CategoryMistakes">here</a>. I think <a href="/node/140">my post about Michael Hicks</a> makes this point clearly.)</p>
<p>4. The reviewer sidesteps my claim that “multiple POPL members still tell me that it *is* possible to fully verify a digital *system*, even after having studied my work in detail.” I am willing to bet that the reviewer has no clue what I am talking about; i.e., that s/he has not understood my paper very well after all. Of course, that is first and foremost <em>my</em> mistake. I am, after all, still struggling to find the right words and tone to get my message across to the POPL community.</p>
<p>But what I want to discuss here and now is the implicit statement that Dave Parnas and I are “non-members of the POPL community” (which is OK) and <strong>therefore</strong><em> <span style="text-decoration: underline;">we</span> have misinterpreted</em> — and continue to misinterpret — the work of Chaudhuri and other POPL members (which is not OK). Talk about the elite of computer science! From a sociological angle, that's a rather impressive remark. It fits right into the 2004 book <a href="https://mitpress.mit.edu/books/mechanizing-proof"><em>Mechanizing Proof: Computing, Risk, and Trust</em></a>, written by the sociologist Donald MacKenzie. Instead, the POPL community could welcome the scrutiny of Parnas and like-minded engineers in order to prevent two big communities (software engineering and computer science) from drifting further apart.</p>
<p>In retrospect, surely Chaudhuri et al. grasp the difference between a computer program and a mathematical program. (After all, who would not be able to make such a simple distinction?) Moreover, based on the extensive feedback from the first reviewer (not shown in the present blog post), I am inclined to re-phrase several sentences of my rejected POPL paper. I will only do so in future writings and intentionally present my original and imperfect narrative in the sequel.</p>
<p>That all being said, and as you can judge for yourself below, it is Dave Parnas who seeks conceptual clarity and who <em>does understand</em> what the POPL community is doing. Parnas is asking Chaudhuri et al. to be more precise in the way they formulate their research findings and not to oversell their mathematical results. It is Parnas who wants to differentiate between the “computer program” and its mathematical model(s), while Chaudhuri et al. apparently do not want to go along with him.</p>
<p>As promised, now follows the introduction of my unorthodox and rejected POPL paper.<strong> </strong></p>
<p> </p>
<p><strong>2. The Introduction of my Rejected POPL Paper</strong></p>
<p>In his 1985 paper `The limits of correctness' [39], Brian Cantwell Smith discussed an important issue concerning program verification. He pointed out that a program always deals with the world through a model; it cannot deal with the world directly. There is always a layer of interpretation in place. Therefore, if we define a correct program as a program that does what we intended it to do, as opposed to what we formally specified it to do, proving a program correct is intrinsically very difficult. (The word “we” refers to my former student, Rosa Sterkenburg, and myself. Rosa is a co-author of the present introduction but carries no responsibility for the rest of this article.)</p>
<p>To illustrate the difficulty underlying Smith's <em>world-model</em> relation, assume we have a mathematical model of the world in which all apples are modeled as green and all strawberries as red. With this model it is quite simple to distinguish an apple from a strawberry: simply use the color to differentiate between the two kinds of fruit. If we thus write a program that can discriminate between red and green, then, at least with respect to our model, we can mathematically prove that our program can indeed distinguish between apples and strawberries. However, in the real world there exist red apples as well. So our model is not perfect; our proven-correct program will tell us a red apple is a strawberry. As Smith put it:</p>
<blockquote><p>Just because a program is ‘proven correct’, you cannot be sure that it will do what you intend. [39, p.18]</p>
</blockquote>
<p>Smith argued that it is necessary for our model of the world, or <em>any</em> mathematical model for that matter, to be incorrect in some ways. For, if a model were to capture everything, it would be too complex to work with [39, p.20-21]. In the apple and strawberry example, the model need not capture the molecular structure of the fruits; doing so would presumably over-complicate matters for the application at hand. We need models to abstract away irrelevant — and, unfortunately, also relevant — aspects of the world in order to make an industrial problem mathematically tractable.</p>
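<p>The apple-and-strawberry example can be compressed into a few lines of code (a toy sketch of my own, not Smith's):</p>

```python
# In the model, color fully determines the fruit, so this classifier is
# provably "correct" with respect to the model.
def classify(color):
    return "apple" if color == "green" else "strawberry"

assert classify("green") == "apple"  # holds in the model and in the world

# Model-world mismatch: the real world also contains red apples,
# and the proven-correct program confidently gets them wrong.
print(classify("red"))  # strawberry
```

<p>The proof of correctness is impeccable; it is the model's premise (that only strawberries are red) that fails in the world.</p>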
<p>More important for the present article is the similar, yet slightly different, <em>programming-model</em> relation; that is, the relation between a programming language and the computer programs written in that language on the one hand and their mathematical counterparts on the other hand. This is a relation that Smith did not address in his paper. When proving a property about a computer program, such as deadlock prevention, some mathematical model of that program is used instead of the program itself. As a result, our formal proof about the program tells us something about its model, and only indirectly something about the program. Here, too, we have the problem that our model is not perfect, for it relies on one or more key abstractions from what is specified in the programming language's manual. Depending on what is written in that manual and how we choose to model the language at hand, two possible — and very different — key abstractions could be:</p>
<ul><li>abstraction A_{I}^{pos}, which allows for the representation of <em>arbitrary large positive integers</em>, and</li>
<li>abstraction A_{R}, which allows for the representation of <em>infinite precision real numbers</em>.</li>
</ul><p>(I could equally well consider some related issues — such as (1) an unbounded number of memory locations that can be addressed, and (2) an unbounded length of programs that can be compiled — but choose not to do so here.)</p>
<p>Moreover, the language manual, as David Harel puts it nicely in his book <em>The Science of Computing</em>, contains “a wealth of information concerning both syntax and semantics, but the semantical information is usually not sufficient for the user to figure out exactly what will happen in each and every syntactically legal program” [23, p.52-53]. Defining a formal syntax of a programming language has not been considered a major problem since the advent of the Backus-Naur Form (BNF) notation, but finding adequate formal semantics has remained a vexing research topic from the early 1960s up to this day.</p>
<p>In the present exposition a “mathematical model” of a program (or programming language) refers to both the formal syntax and a formal semantics of the program (or programming language) at hand, although the formal syntax will often go unmentioned. It should be noted that in model theory, by contrast, numerals and other expressions (cf. syntax) are <em>modeled by</em> the natural numbers and other mathematical objects (cf. semantics).</p>
<p>To recapitulate, a mathematical model of a computer program is imperfect in one or more ways for its semantics relies on one or more key abstractions. As a result, and this is an important point for the rest of the paper, <em>showing that something cannot be done according to a model does not necessarily imply that it cannot be done in practice</em>. Of course, one's formal semantics may become the standard definition of the programming language under study, but that does not invalidate the italicized statement made in the previous sentence.</p>
<p><strong>The Chaudhuri-Parnas Dialogue</strong></p>
<p>To illustrate some of the issues just raised, we briefly turn to two recent writings of Swarat Chaudhuri [5,6]. Chaudhuri has studied properties of programs that, in his mathematical framework, are necessary conditions for a correct program to possess. Starting with his 2012 article `Continuity and Robustness of Programs' [5], Chaudhuri and his co-authors, Gulwani and Lublinerman, discuss a way to prove mathematical continuity of programs. A continuous program is not necessarily correct but, according to the authors, a correct program needs to be robust and therefore continuous.</p>
<p>Chaudhuri, Gulwani, and Lublinerman address neither the world-model relation nor the programming-model relation in their 2012 article. While the first relation does not seem very relevant with regard to their article, the second relation is, we believe, significant. To put it candidly, Chaudhuri et al. do not seem to be aware that they are not proving something about their programs, but that they are proving something about some model of their programs. This is very clearly pointed out by David Parnas in a reaction to their article. Parnas says:</p>
<blockquote><p>Rather than verify properties of an actual program, it examined models of programs. Models often have properties real mechanisms do not have, and it is possible to verify the correctness of a model of a program even if the actual program will fail. [36]</p>
</blockquote>
<p>Parnas specifically describes what he thinks is problematic about the mathematical assumptions underlying Chaudhuri et al.'s model:</p>
<blockquote><p>The article ignored the problem by both declaring: ‘...our reals are infinite-precision’ and not specifying upper and lower bounds for integers. [36]</p>
</blockquote>
<p>Because of these abstractions, which include what we have called abstractions A_{R} and A_{I}^{pos}, Parnas reaches a conclusion similar to what Smith wrote in 1985: proving something correct does not mean it will do what you want. “Some programs,” Parnas continues,</p>
<blockquote><p>can be shown to be continuous by Chaudhuri’s method but will exhibit discontinuous behavior when executed. [36]</p>
</blockquote>
<p>Chaudhuri et al. reply to this critique as follows:</p>
<blockquote><p>From a purely mathematical perspective, any function between discrete spaces is continuous, so all <em>computer</em> programs are continuous. [36, my emphasis]</p>
</blockquote>
<p>This reaction suggests that Chaudhuri et al. are not aware that considering a computer program as a mathematical function is a modeling step. A computer program is — and this will become our main conceptual point — categorically different from a mathematical program, even if the latter relies solely on finite abstractions. That is, a C program residing in a laptop (i.e., a computer program) is categorically distinct from its mathematical model, even if the semantics is a “real semantics,” i.e., a “detailed and cumbersome low-level model” of C. (The words just cited come from Harvey Tuch et al. [43] and we shall see in Section 5 whether Tuch et al. agree with our categorical distinction or whether they reason like Chaudhuri et al.)</p>
<p>In a later paper entitled `Consistency Analysis of Decision-Making Programs' [6], Chaudhuri, Farzan, and Kincaid do, however, address some of the issues raised by Smith in 1985. They write:</p>
<blockquote><p>Applications in many areas of computing make discrete decisions under uncertainty, for reasons such as limited numerical precision in calculations and errors in sensor-derived inputs. [6, p.555] </p>
<p>Since the understanding of nontrivial worlds is often partial, we do not expect our axioms to be complete in a mathematical sense. [6, p.557]</p>
</blockquote>
<p>These words relate to the model-world relation, which was not addressed in Chaudhuri's paper about continuity. With this remark Chaudhuri acknowledges that mathematical modeling of the real world is needed to get things done on a computer. Chaudhuri also indicates that this understanding usually is not perfect, which is close to Smith's point that one can never aspire to have a perfect mathematical model. That said, Chaudhuri does not address the more stringent programming-model relation and, hence, does not respond to Parnas's criticism.</p>
<p>Smith, Parnas, and some other software scholars consistently and sometimes explicitly distinguish between <em>concrete physical objects</em> such as a laptop, <em>technical artefacts</em> such as the programming language C, and <em>mathematical models</em> such as a finite state machine and a universal Turing machine. According to them, both a laptop and a finite state machine are finite objects, but the former is physical while the latter is mathematical. Likewise, equating a laptop with a universal Turing machine is problematic, not primarily because the former is finite and the latter is infinite, but because the former moves when you push it and smells when you burn it while the latter can neither be displaced nor destroyed.</p>
<p> </p>
<p><strong>References</strong></p>
<p>[5] S. Chaudhuri, S. Gulwani, and R. Lublinerman. Continuity & robustness of programs. Communications of the ACM, 55(8):107–115, 2012. </p>
<p>[6] S. Chaudhuri, A. Farzan, and Z. Kincaid. Consistency analysis of decision-making programs. In Proceedings of the 41st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 555–567, 2014.</p>
<p>[23] D. Harel. <em>The Science of Computing: Exploring the Nature and Power of Algorithms</em>. Addison-Wesley, 1987, 1989.</p>
<p>[36] D. Parnas. On Proving Continuity of Programs. Letter to the Editor of the Communications of the ACM, 55(11):9, November 2012.</p>
<p>[39] B. Smith. The limits of correctness. ACM SIGCAS Computers and Society, 14,15:18–26, January 1985.</p>
<p>[43] H. Tuch, G. Klein, and M. Norrish. Types, Bytes, and Separation Logic. In Principles of Programming Languages, 2007.<strong><br /></strong></p>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/category-mistakes" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Category Mistakes</a></li></ul></section>
Tue, 04 Oct 2016 11:33:19 +0000 | egdaylight | http://dijkstrascry.com/node/142#comments
Deep Specification and Andrew Appel
http://dijkstrascry.com/node/141
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">25 September 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>I'm reading about “the science of deep specification.” This grand research project is led by Andrew Appel and some other big names on the east coast of the USA. It is only after having read <a href="http://deepspec.org/research/">Appel et al.'s on-line research overview</a> again today that I am now 100% convinced that the vast majority of researchers in program verification have not taken the simple objections made by <a href="http://dl.acm.org/citation.cfm?id=48530">James Fetzer</a> (1988) and <a href="https://mitpress.mit.edu/books/mechanizing-proof">Donald MacKenzie</a> (2004) seriously. Again, perhaps they have some very good reasons not to have done so and then I'd like to know why. (By the way, MacKenzie's book also covers some beautiful accomplishments of Andrew Appel's father.)</p>
<p>Andrew Appel and other computer scientists often refer to a <em>computer</em> program while they are actually referring to another representation of <span style="text-decoration: underline;"><em>a</em></span> mathematical model of <em>the</em> computer program at hand. I use the term “<em>mathematical</em> program” as a synonym for “mathematical model of a computer program.” So while a computer program can represent a mathematical program, so can</p>
<ul><li>a text-on-paper representation (called a <em>paper</em> program) and</li>
<li>a text-on-screen representation (called a <em>screen</em> program) </li>
</ul><p>represent the same mathematical program.</p>
<p>For example, the following is projected onto your screen and therefore it is a screen program:</p>
<pre><code><span class="p">(</span><span class="nf">defun</span> <span class="nv">factorial</span> <span class="p">(</span><span class="nf">n</span><span class="p">)</span>
  <span class="p">(</span><span class="k">if </span><span class="p">(</span><span class="nb">= </span><span class="nv">n</span> <span class="mi">0</span><span class="p">)</span>
      <span class="mi">1</span>
      <span class="p">(</span><span class="nb">* </span><span class="nv">n</span> <span class="p">(</span><span class="nf">factorial</span> <span class="p">(</span><span class="nb">- </span><span class="nv">n</span> <span class="mi">1</span><span class="p">)))</span> <span class="p">)</span> <span class="p">)</span></code></pre><p>Again, this textual representation, projected onto your screen, is a representation of a mathematical object, i.e., a mathematical program. (One can perhaps also say that it is a textual representation of a computer program. I definitely take no issue with that viewpoint. But, in order to avoid confusion, I will not use the verb "to represent" in that manner.) One might be tempted to say that the textual representation <em>is</em> a computer program. That is frequently not a problem, but it is in the current setting, where formal methodists themselves want us to be mercilessly precise. So let's indeed try to be crystal clear.</p>
<p>The paper program, the screen program, and the computer program are three different representations of the mathematical model. (They are three different <a href="/node/139">technical artefacts</a>.) The first and second representation, intended for humans to read, are very different from the third. A computer program resides electronically in a specific computer and is (a) what we all would like to get “correct” and (b) what we all have difficulty comprehending because it is an artefact engineered for a digital machine.</p>
<p>Coming now to Appel et al.'s research overview. The authors state that</p>
<blockquote><p>Until recently "program verification logics have not been `live:' they have been about models of the program."</p>
</blockquote>
<p>But any <em>mathematical</em> endeavor such as program verification can at best be about mathematical models of the engineered system at hand. That system, in turn, can contain one or more computer programs. Each computer program can be modeled in one or more ways by a mathematically-inclined researcher. Likewise, one can mathematically model a <em>running</em> computer program by resorting to, say, an operational semantics of the computer program under scrutiny. But how can mathematics become `live'? What, exactly, is this supposed to mean?</p>
<p>Continuing with Appel et al.'s words:</p>
<blockquote><p>Program verification logics are now “connected directly and mechanically to the program”.</p>
</blockquote>
<p>Notice that Appel et al. oftentimes use the word “program” (such as the last word in the previous sentence) as an abbreviation for “computer program” — i.e., the real mechanical stuff. But the first occurrence of the word “Program” in the previous quote has to refer to a mathematical program! For, how could you prove a logical statement without having a semantic model of the engineered computer program? Or, to be more precise, the first occurrence of “Program” has to refer to a textual representation of Appel et al.'s chosen mathematical program while the last occurrence has to refer to an electronic — humanly unreadable — representation, i.e., a computer program.</p>
<p>In other words, the authors conflate their textual representation of their mathematical program and an electronic representation of that same mathematical program (i.e., the computer program that we all want to get “correct”).</p>
<p>Appel et al. are basically saying what <a href="/node/139">John Reynolds</a> and <a href="/node/140">Michael Hicks</a> have said before, namely that full verification of a hw/sw system is possible, both in principle and in practice. My claim is that this is not possible in principle but that program verification nevertheless does have a lot to offer in practice, especially if formal methodists will get their terminology right when disseminating their research findings to, say, software engineers in industry.</p>
<p>Continuing with the words of Appel et al.:</p>
<blockquote><p>“we envision this network of verified software components, connected by deep specifications—specifications that are <em>rich, live, formal,</em> and <em>two-sided</em>.”</p>
</blockquote>
<p>Some of these italicized words I do not understand at all but perhaps it's just me. Continuing:</p>
<blockquote><p>“whatever properties are proved about a program will actually hold when the program runs”</p>
</blockquote>
<p>This latter sentence exemplifies a category mistake, essentially already pointed out by <a href="http://dl.acm.org/citation.cfm?id=48530">Fetzer in 1988</a> and refined by me as follows: One can mathematically prove a <em>mathematical property</em> on a mathematical model of a computer program. One cannot prove a mathematical property on a computer program, regardless of whether the latter is running or not. So the first occurrence of “program” in the previous quote refers to a mathematical program while the latter occurrence — “the program runs” — has to refer to a computer program.</p>
<p>To conclude, Appel and his fellow researchers fuse the category of mathematical objects (which includes mathematical programs) with the category of engineered systems (which includes computer programs). Perhaps there are very good reasons to do so and I am merely demonstrating my ignorance. In any case, the `science of deep specification' remains intriguing to me because I am not even able to grasp some of its basic principles.</p>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/category-mistakes" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Category Mistakes</a></li></ul></section>
Sun, 25 Sep 2016 13:12:34 +0000 | egdaylight | http://dijkstrascry.com/node/141#comments
Analysis Tools and Michael Hicks
http://dijkstrascry.com/node/140
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">21 September 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>Before really discussing <a href="/node/138">category mistakes</a> in computer science in follow-up posts, I will first continue testing my <a href="/node/139">aforementioned categories</a> on the writings of people I admire the most. I shall take a 2014 blog post, written by <a href="http://www.pl-enthusiast.net/2014/07/01/how-did-heartbleed-remain-undiscovered-and-what-should-we-do-about-it/">Michael Hicks</a> in connection with the Heartbleed bug. (His post was written on July 1st, 2014 and I accessed it on September 19th, 2016.)</p>
<p>To recapitulate, and based largely on the work of <a href="http://cswww.essex.ac.uk/staff/turnr/">Raymond Turner</a>, I distinguish between three separate categories:</p>
<ul><li>computers are <em>concrete physical objects</em>, </li>
<li>computer programs and also computer programming languages are <em>technical artefacts</em>, and </li>
<li>Turing machines, finite state machines, and prime numbers are <em>abstract objects</em>. </li>
</ul><p>Furthermore, based on the writings of James Moore, James Fetzer, Timothy Colburn, Donald MacKenzie (and others whom I have cited repeatedly in previous blog posts), at the very least, a conceptual distinction is in order between a "mathematical program" and a "computer program". Based on Turner's work, I can say today that the former belongs to the category of abstract objects while the latter belongs to the category of technical artefacts.</p>
<p>Now, coming to the blog post of Michael Hicks, I have two observations to make. First, he writes about:</p>
<blockquote><p>"full formal verification, the end result of which is a proof that the code will always behave as it should. Such an approach is being increasingly viewed as viable."</p>
</blockquote>
<p>So Hicks says we can have a mathematical proof about the behavior of an engineered system. Fetzer already made clear that this is not possible. The best we can do is prove a mathematical property about some mathematical model of a running system. And doing so can, indeed, be of great value in practice, as I experience daily. Hicks can only be right (and Fetzer is then wrong) if the mathematical object at hand and the engineered system under scrutiny belong to the same category.</p>
<p>Have I taken Hicks's words out of context? Am I wrong in linking Hicks's statement to that of <a href="/node/139">John Reynolds</a> and <a href="/node/138">Dr. X</a>? Judge for yourself. Links to statements made by Tony Hoare and other great minds in computer science have already been provided by Fetzer, Colburn, and <a href="https://mitpress.mit.edu/books/mechanizing-proof">MacKenzie</a>.</p>
<p>Regardless of whether you think Fetzer and I are wrong, at least we can all agree on the following historiographical observation: the best of the best in computer science have largely ignored the writings of philosophers of computer science. They do not mention Fetzer and the like. Perhaps they have some very good reasons not to do so (and perhaps they do not).</p>
<p> </p>
<p><strong>Soundness & Completeness</strong></p>
<p>My second point concerning Hicks's blog post is about the alleged practical implications of Rice's Theorem. This is where my own thoughts are put center stage, for the writings of Fetzer and the other aforementioned philosophers do not cover computability theory.</p>
<p>Hicks provides definitions of a "sound analysis" and a "complete analysis" — definitions that I will scrutinize later. Hicks then provides the following statement which I honestly have difficulty comprehending:</p>
<blockquote><p>Ideally, we would have an analysis that is both sound and complete [Yes], so that it reports all true bugs, and nothing else [No!].</p>
</blockquote>
<p>Even if we would have a <em>mathematical</em> analysis that is both sound and complete, then this mathematical accomplishment cannot <em>guarantee </em>something about the real world, including the behavior of the engineered <em>system</em> under scrutiny — unless, again, the mathematical object and the engineered system belong to the same category (and, <em>moreover, </em>we can specify absolutely everything about the engineered system in a concise and useful manner).</p>
<p>Is Hicks's reasoning flawed or am I missing something? This is a genuine question. In my words, a sound and complete analysis can provide engineers <em> extra confidence</em> that their system will behave appropriately in the real world.</p>
<p>Let me just mention at this point that Edsger Dijkstra, Aad van Wijngaarden, and several others would have mathematically modeled a computer program with a Turing-incomplete model of computation and, specifically, with a finite state machine. So I <em>also</em> struggle with Hicks's suggestion to only use Turing-complete languages when mathematically modeling computer programming languages. I believe he makes that suggestion in the following passage:</p>
<blockquote><p>Unfortunately, such an [ideal] analysis [which is both sound and complete] is impossible for most properties of interest, such as whether a buffer is overrun (the root issue of Heartbleed). This impossibility is a consequence of Rice's theorem, which states that proving nontrivial properties of programs in Turing-complete languages is undecidable. So we will always be stuck dealing with either unsoundness or incompleteness.</p></blockquote>
<p>I definitely agree with Hicks that <span style="text-decoration: underline;"><em>if</em></span> we use Turing-complete languages, <span style="text-decoration: underline;"><em>then</em></span> we cannot ignore Rice's Theorem, "which states that proving nontrivial properties" of our <em>mathematical</em> programs (expressed in our Turing-complete language) "is undecidable." But most engineers I know don't have a preference for</p>
<ul><li>sticking to one modeling language only, nor do they </li>
<li>advocate a Turing-complete language <em>per se</em>. </li>
</ul><p>Contrary to programming language specialists, engineers don't want to attach <span><em>precisely one meaning</em> to each computer program and, likewise, to each computer programming language. A technical artefact can be mathematically modeled in more than one way. Each model has its pros and cons. I can model a computer with both a finite state machine <em>and</em> a linear bounded automaton. Likewise, I can model my C computer program in multiple, complementary ways; e.g., with a finite state machine, with primitive recursive functions, and with general recursive functions. </span><span>The richness lies in the multitude of ways to mathematically model reality. <br /></span></p>
<p>Do these relatively simple <em>philosophical</em> principles of programming languages have any bearing on Michael Hicks and his community of researchers?</p>
<p> </p>
<p><strong>Hicks's definitions</strong></p>
<p>The root of the confusion, I believe, lies in Hicks's definitions. Here is what Hicks says about a "sound analysis:"<strong><br /></strong></p>
<ul><li>A <em>sound</em> analysis is one that, if there exists an execution that manifests a bug at run-time, then the analysis will report the bug. </li>
</ul><p>I would like to emphasize that Hicks's "execution" is a mathematical object and, moreover, it is a mathematical object in Hicks's chosen model of computation. (Somebody else can choose <em>another</em> model of computation). Of course the word "execution" refers to a "real execution" but the reference is only indirect and the indirection can be made more explicit for the sake of conceptual clarity by distinguishing between "mathematical executions" and "real executions". The former belongs to the category of abstract objects while the latter belongs to the category of concrete physical objects.</p>
<p>Again, a sound analysis says something about the mathematical program and only <em>indirectly</em> something about the computer program under scrutiny. Furthermore, since someone else can choose another mathematical program to model the same computer program, it is misleading to suggest that there is a one-to-one mapping between the computer program and the chosen mathematical program.</p>
<p>Similar remarks can be made about Hicks's definition of a "complete analysis:"</p>
<blockquote><p>On the flip side, a <em>complete</em> analysis is one that, if it reports a bug, then that bug will surely manifest at run-time.</p>
</blockquote>
<p>There is a categorical distinction to be made between a mathematically modeled bug and a bug encountered during the execution of a real-world program. They are not the same thing.</p>
<p>Am I right to conclude that Hicks and other programming language specialists oftentimes think they are <em>directly</em> referring to <em>both</em> their mathematical model <em>and</em> the actual computer program?</p>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/category-mistakes" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Category Mistakes</a></li></ul></section>
Wed, 21 Sep 2016 10:56:11 +0000 | egdaylight | http://dijkstrascry.com/node/140#comments
Technical Artefacts and John Reynolds
http://dijkstrascry.com/node/139
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">19 September 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>Before discussing several <a href="/node/138">"category mistakes" in computer science</a> in follow-up posts, it is much preferred to first introduce a few categorical distinctions and definitions and to subsequently test these concepts on the writings of computer science's greats (i.e., the writings of the people I and many others admire the most). I shall take a 2012 letter which John Reynolds wrote in connection with my book <a href="http://www.lonelyscholar.com/dawn"><em>The Dawn of Software Engineering</em></a> (and in which he provided an extensive amount of valuable feedback about the history of programming languages).</p>
<p>Specifically, and in line with the philosophy of technology, a distinction can be made between at least three separate categories:</p>
<ol><li>computers (including laptops and iPads) are <em>concrete physical objects</em>,</li>
<li>computer programs and also computer programming languages are <em>technical artefacts</em>, and</li>
<li>Turing machines, finite state machines, and prime numbers are <em>abstract objects</em>. </li>
</ol><p>Note that stating the existence of abstract objects does not commit one to believe in a platonic realm. I thank the philosopher of computer science Giuseppe Primiero for bringing this observation to my attention. Note moreover that I am not merely distinguishing between "concrete systems" and their "formal models" but between three separate categories.</p>
<p>Further clarifications are in order:</p>
<ul><li>A <em>computer program</em> refers to the class of programs in standard, often commercial, computer programming languages that can actually be compiled and run on a specific electronic device; i.e., a computer. </li>
<li>A <em>mathematical program</em> is a mathematical model — containing a formal syntax and a formal semantics — of a computer program.</li>
<li>A <em>technical artefact</em> is a physical structure with functional properties [1]. </li>
</ul><p>Computer programs and corresponding computer programming languages are technical artefacts in that they only fulfill their intended function because of their actual physical structure. The physical manifestations alone, however, are not technical artefacts [2,3]. Computer scientists build technical artefacts (e.g., computer programs, data types, and the like) which enable them to “reflect and reason about them independently of any physical manifestation” [3]. Reflecting and reasoning are often done by resorting to mathematics; that is, by using one or <em>more(!)</em> mathematical programs for <em>the</em> computer program under scrutiny.</p>
<p>Now, is it true that computer scientists oftentimes refer to a computer program while they are actually referring to a mathematical program? To be more precise: are they actually referring to a text-on-paper representation (called a <em>paper program</em>) or a text-on-screen representation (called a <em>screen program</em>) of the mathematical program under scrutiny, instead of the <em>computer program</em> itself (which is something that resides electronically in a computer)? Or am I just being pedantic and none of this really matters or all of this is well known?</p>
<p>When computer scientists refer to their mathematical model, is it true that they oftentimes incorrectly think that they are also <em>directly</em> referring to the actual computer program? Do they distinguish between different representations of the mathematical program, such as a paper program, a screen program, and a computer program? Each representation is, once again, a technical artefact and <em>not</em> a mathematical object.</p>
<p>I'd like to know whether Raymond Turner is the only computer scientist (or one of few computer scientists) who thinks along these lines [3] or whether the vast majority of, say, programming language specialists (a) knows all of the above and (b) doesn't consider it all too important. (I would be surprised if all of the above is common knowledge because the work of Turner and his colleagues, referenced below, has only been published recently.)</p>
<p>I'm also trying to find out whether the best of the best in academia really understand the categorical distinction between a "mathematical program" and a "computer program". So, as a first attempt, I take an excerpt from John Reynolds's 2012 letter. I had difficulty comprehending this part of Reynolds's letter because I had by then already studied Donald MacKenzie's 2004 book <a href="https://mitpress.mit.edu/books/mechanizing-proof"><em>Mechanizing Proof</em></a> in great detail. Here are Reynolds's words:</p>
<blockquote><div id="yiv5226709285yui_3_16_0_1_1474218282052_6118" dir="ltr">As an example, it is likely that, in perhaps ten years time, when you download a program from the Internet, you will also download a formal proof that the program will respect safety conditions (e.g. no buffer overflow or dereferencing of pointers into the wilderness) that will ensure that it cannot disrupt the behavior of other programs running simultaneously. And your computer will check these proofs before running the program.</div>
</blockquote>
<div id="yiv5226709285yui_3_16_0_1_1474218282052_6136" dir="ltr">Later on Reynolds says that when this notion of "proof-carrying code" is "complete and widely used, logic will have won a major victory."</div>
<div dir="ltr">Now, here's my humble attempt to add adjectives and nouns, such as "computer" and "mathematical," with the sole purpose of trying to truly comprehend Reynolds's message with Turner's philosophical glasses:</div>
<blockquote><div id="yiv5226709285yui_3_16_0_1_1474218282052_6065" dir="ltr">As an example, it is likely that, in perhaps ten years time, when you download a COMPUTER program from the Internet, you will also download a REPRESENTATION of a formal proof that the MATHEMATICAL program (which is *a* MODEL of the COMPUTER program under scrutiny) will respect safety conditions (e.g. no buffer overflow or dereferencing of pointers into the wilderness). This will NOT ensure but will definitely increase our CONFIDENCE that the COMPUTER program cannot disrupt the behavior of other COMPUTER programs running simultaneously. And your computer will check REPRESENTATIONS of these proofs before running the COMPUTER program.</div>
</blockquote>
<div id="yiv5226709285yui_3_16_0_1_1474218282052_6340" dir="ltr"><span id="yiv5226709285yui_3_16_0_1_1474218282052_6339">If, by any chance, you are receptive to the clarifications put forth by MacKenzie, <a href="https://www.amazon.com/Philosophy-Computer-Science-Bureaucarcies-Administration/dp/156324991X">Timothy Colburn</a> and others, then you will not object to my re-phrasing of Reynolds's excerpt. Conceptual clarity is what software scholars are after, and I take gentle issue with some programming language experts who consider all of the above trivial and irrelevant. Granted, one could argue that Reynolds crafted his sentences with brevity in mind. But in my profession, which is safety engineering, one has to substitute the following words of Reynolds's in order to be taken seriously:</span></div>
<div dir="ltr">
<ol><li><span>"will respect safety conditions" --> "shall respect safety conditions"</span></li>
<li><span>"ensure" --> "increase our confidence"</span></li>
</ol></div>
<div dir="ltr"><span>In fact, today I was corrected by a professional safety engineer for writing, just like Reynolds, "will respect" instead of "shall respect". The rationale for this seemingly small correction is that we, formal methodists, do <em>not</em> have any guarantee that the safety conditions <em>will</em> be met at runtime by an engineered <em>system</em>. Likewise, Reynolds's verb "ensure" is frowned upon in the wider world of software engineers. This has nothing to do with brevity. In retrospect, I find it extremely ironic that non-formal-methods people have to persuade formal-methods advocates, like Reynolds and myself, to be more precise in the way we formulate our mathematical findings.</span></div>
<div dir="ltr"><span><br /></span></div>
<div dir="ltr"><span>I conclude with three take-away messages. First, I don't think I'm being overly concerned with minute details. Wait till you meet a safety engineer who is responsible for the software in an airplane. Second, I therefore think formal methodists should take all of the above seriously if they want to be respected by software engineers. Third, and this is a point I will come back to in another post, many programming language specialists want to attach <em>precisely one meaning</em> to each computer programming language. But from an engineering perspective that's not the right thing to do. Just as a computer can have different (complementary!) models of computation, a computer programming language can be modeled in different (complementary!) ways. The richness lies in the multitude of models. </span><span>People like </span><span id="yiv5226709285yui_3_16_0_1_1474218282052_6339">Edsger Dijkstra and Christopher Strachey each attached a different semantic model to the same programming language, and similar things are happening today around the world with regard to C and other industrial programming languages. Like it or not, C is informal; it is <em>made</em> (by humans), and we can mathematically model it in <em>many</em> ways. </span></div>
<p>This post served to illustrate that categorical distinctions are in order. I hope to eventually convince the reader that they should be made for the sake of clarity, for the sake of making sense to both outsiders <em>and</em> ourselves.</p>
<p> </p>
<p><strong>References</strong></p>
<p>[1] P. Kroes. <em>Engineering and the dual nature of technical artefacts</em>. Cambridge Journal of Economics, 34:51–62, 2010.</p>
<p>[2] N. Irmak. <em>Software is an abstract artifact</em>. Grazer Philosophische Studien, 86:55–72, 2012.</p>
<p>[3] R. Turner. <em>Programming languages as technical artefacts</em>. Philosophy and Technology, 27(3):377–397, 2014. First online: 13 February 2013.</p>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/category-mistakes" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Category Mistakes</a></li></ul></section>
Mon, 19 Sep 2016 13:15:34 +0000 | egdaylight | http://dijkstrascry.com/node/139#comments
Category Mistakes in Computer Science
http://dijkstrascry.com/CategoryMistakes
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">15 September 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>In 2006 I defended my Ph.D. thesis at KU Leuven. Dr. X from abroad was on my defense committee. A few days earlier, he had taken the liberty of sharing with me some of the "fundamental limitations" he had found with regard to transformational systems, such as the system I had designed, implemented, and documented for my Ph.D. (A comprehensive overview of my system later appeared in <em>Science of Computer Programming</em> [1].) Dr. X had already published his theoretical insights, and he wanted me to incorporate his findings into my Ph.D. dissertation.</p>
<p>It took a Master of Logic program at the University of Amsterdam (2007-2009) before I could truly understand each and every argument in Dr. X's theoretical paper. My incentive to do so was that I intuitively felt that Dr. X's reasoning was flawed. In the process, and due to my strong interest in the history of ideas, I started to see that several computer scientists, including the most prominent ones, had been making and were continuing to make "category mistakes" like Dr. X.</p>
<p>Very briefly: Dr. X's point was that he had mathematically <em>proved</em> that certain <em>systems</em> cannot be <em>engineered</em>. That is, on his view, mathematics (and computability theory in particular) prevails over engineering, and all engineers had better start learning computability theory. I believe I am today in a position to improve Dr. X's wording with regard to his own article. I would say that:</p>
<blockquote><p>If I model my transformation system with his Turing-machine formalism, then I obtain a classical undecidability result from computability theory which gives me insights into the industrial problem that I and fellow engineers are trying to solve.</p>
</blockquote>
<p>The undecidability result does <em>not</em> tell me that specific systems cannot be built. That being said, a good engineer could, or perhaps should (with an emphasis on "perhaps"), take the undecidability result into account when engineering systems.</p>
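<p>The distinction I am drawing can be seen in the diagonalization argument itself. Here is a minimal Python sketch (all names hypothetical): the contradiction lives inside the mathematical model, where a <em>total</em> halting decider is assumed, and it says nothing about whether a useful, partial analysis tool can be engineered.</p>

```python
# Classic diagonalization, sketched in Python: no total function
# `halts(f, x)` can correctly decide halting for *all* programs.

def make_diagonal(halts):
    """Given any claimed halting decider, build its counterexample."""
    def diagonal():
        if halts(diagonal, None):   # decider says diagonal halts...
            while True:             # ...so diagonal loops forever,
                pass
        return None                 # otherwise it halts immediately.
    return diagonal

def claims_loops(f, x):
    """A naive 'decider' that always answers 'does not halt'."""
    return False

d = make_diagonal(claims_loops)
# claims_loops says d never halts, yet d() returns immediately:
# the decider is provably wrong about its own diagonal program.
```

<p>The mathematical moral is that <em>every</em> candidate decider has such a counterexample. The engineering moral, as argued above, is only that an analysis tool must be allowed to answer "don't know" on some inputs, not that the tool, or the transformation system it analyzes, cannot be built.</p>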
<p>Dr. X's view on undecidability and its alleged implications for practice is also strongly advocated by David Harel, as exemplified in some of his books and in his <a href="http://www.win.tue.nl/dharel/doku.php?id=from_turing_to_harel#standing_on_the_shoulders_of_a_giantone_person_s_experience_of_turing_s_impact">2013 talk at TU/e</a>, which I attended.</p>
<p>While studying the history of ideas in computer science, I also started to notice that philosophers like James Moor, James Fetzer, and Timothy Colburn, along with the sociologist Donald MacKenzie, had made similar observations years and even decades before me. But computer scientists have mostly ignored these writings, or, as in the case of Fetzer, have ridiculed their author. I am of course referring to Fetzer's 1988 article in the Communications of the ACM, entitled <em>Program Verification: The Very Idea</em> [2]. The story is told in MacKenzie's 2004 book <a href="https://mitpress.mit.edu/books/mechanizing-proof"><em>Mechanizing Proof</em></a>, which should in my opinion be mandatory reading material for students and academics alike.</p>
<p>I, too, was a hard-core formal methodist during my Ph.D. years. Interestingly, none of my peers ever told me that the papers I was reading (i.e., the papers of Edsger Dijkstra, Tony Hoare, and others) were flawed in subtle yet crucial respects, as Fetzer already remarked in 1988. My educated guess is that even top computer scientists are not well aware of the history of their own discipline.</p>
<p>To be even more provocative, chances are that you are a computer scientist who thinks that everybody can and will give the same answer to the following simple, yet fundamental, question:</p>
<blockquote><p>What is a program?</p>
</blockquote>
<p>But wait a minute. Dijkstra says in 1973 that a program is a mathematical object of finite capacity, while Christopher Strachey says, in that very same year, that a program is a mathematical object of infinite size. How can that be? Perhaps computer science is not so clear-cut after all. Moreover, Peter Naur insisted that a program is a model of the real world, which is <em>not</em> a logico-mathematical construction. So here we already have three conflicting views on computer science's most basic question.</p>
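<p>The Dijkstra/Strachey contrast need not be mysterious, and a toy Python sketch (the function and names are my own, purely illustrative) shows how both views can be reasonable. The program <em>text</em> is a small, finite string; what the text <em>denotes</em>, on a Strachey-style reading, is an infinite mathematical function on the natural numbers, of which any run of the finite machine samples only finitely many points.</p>

```python
# A finite program text: one short string of characters.
source = "def double(n):\n    return 2 * n\n"

# Bring the text to life on a (finite) machine.
namespace = {}
exec(source, namespace)
double = namespace["double"]

# Finite-object view: the artifact is this small text.
text_is_small = len(source) < 50

# Infinite-object view: `double` denotes the infinite function
# {(0, 0), (1, 2), (2, 4), ...} on N; we can only ever sample it.
sample_agrees = all(double(n) == 2 * n for n in range(1000))
```

<p>Naur's view is a third reading again: neither the string nor the infinite function, but the programmer's model of some slice of the real world. The sketch does not settle which reading is "right"; it only shows that the readings are genuinely different.</p>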
<p>Subsequent blog posts will shed more light on category mistakes made in computer science, <em>what</em> they are, and which writings of prominent computer scientists I am referring to. In this regard let me also mention the following writings:</p>
<ul><li>E.G. Daylight. <em>Category Mistakes in Computer Science at Large</em>. Peer reviewed by anonymous referees from POPL in 2015, by the CACM in 2016, and in revised form by POPL in 2016. The POPL referees consider the contents trivial and well known (but see my subsequent blog posts!) while the CACM reviewers fundamentally disagree with my research findings. As a result, I re-wrote a small part of that paper with the philosopher of computer science Giuseppe Primiero, resulting in
<ul><li><em>Category Mistakes in Computer Science</em>. Currently under peer review for the CACM.</li>
</ul></li>
<li>Subsequent blog posts will show that programming language experts do at times fuse categories and that conceptual clarity is much in order.</li>
<li>Moreover, I am currently re-writing the whole text in collaboration with Maarten Bullynck and Liesbeth De Mol, i.e., fellow historians and philosophers of science & technology. Dissemination in a reputable journal of philosophy is feasible but our grand challenge is to successfully reach out to computer scientists. </li>
<li>Finally, my findings <em>will also appear in 2017</em> (potentially as part of my forthcoming dissertation on the history & philosophy of computer science).</li>
</ul><p> </p>
<p><strong>References</strong></p>
<p><span><span class="this-person">[1] E.G. Daylight</span></span>, A. Vandecappelle, F. Catthoor. <span class="title"><em>The formalism underlying EASYMAP</em>. Science of Computer Programming, 72(3):71-135, 2008.</span></p>
<p>[2]<span><span class="this-person"> J.H. Fetzer</span></span>. <span class="title"><em>Program Verification: The Very Idea</em>.</span> <a href="http://dblp.uni-trier.de/db/journals/cacm/cacm31.html#Fetzer88"><span><span>Commun. ACM</span></span> <span><span>31</span></span>(<span><span>9</span></span>)</a>: <span>1048-1063</span>, <span>1988</span>.</p>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/category-mistakes" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Category Mistakes</a></li></ul></section>
Thu, 15 Sep 2016 07:04:04 +0000 | egdaylight | http://dijkstrascry.com/CategoryMistakes#comments
Wittgenstein — cars — Turing
http://dijkstrascry.com/WittgensteinCarsTuring
<section class="field field-name-field-histdate field-type-text field-label-inline clearfix view-mode-rss"><h2 class="field-label">Dated: </h2><div class="field-items"><div class="field-item even">1 September 2016</div></div></section><div class="field field-name-body field-type-text-with-summary field-label-hidden view-mode-rss"><div class="field-items"><div class="field-item even" property="content:encoded"> <p>Researchers have the responsibility of making clear the limits of their understanding about technology, including the software that is soon to be deployed in self-driving cars. Just like most people do not want conventional cars with drunken drivers in the vicinity of their beloved ones, I shall give arguments (which complement my previous arguments: <a href="/node/133">here</a> and follow-up <a href="/node/134">here</a>) to eschew self-driving cars as well.</p>
<p>My blog posts on self-driving cars constitute an article that I have written and submitted to journal X for peer review. Fortunately, journal X has granted me the right to keep these posts on-line.</p>
<p>In my opinion, self-driving cars should be tested in well-delimited areas for the rest of the present century, before they are “let loose” on our roads. Unfortunately, given the extremely large investments in technology for fully-autonomous vehicles, it seems that self-driving cars are coming and are here to stay. If so, then I foresee the following publication in a few decades:</p>
<blockquote><p>During the early years of research in self-driving cars, considerable progress created among many the strong feeling that a working system was just around the corner. The illusion was created by the fact that a large number of problems were rather readily solved. It was not sufficiently realized that the gap between such output and safety-critical real-time software proper was still enormous, and that the problems solved until then were indeed many but just the simplest ones whereas the “few” remaining problems were the harder ones—very hard indeed.</p>
</blockquote>
<p>This passage is my slight modification of what Yehoshua Bar-Hillel wrote in his famous 1960 report `The Present Status of Automatic Translation of Languages' in which he retrospectively scrutinized the field of machine translation [1]. Bar-Hillel's original words also appear in the following philosophical writings of Hubert Dreyfus: <em>What Computers Can't Do</em> [3] and <em>What Computers Still Can't Do</em> [4], i.e., books that changed the course of 20th-century research in Artificial Intelligence (AI). As Dreyfus explains, Marvin Minsky, John McCarthy, and other AI researchers implicitly took for granted that common knowledge can be formalized as facts. Following Plato, Gottfried Leibniz, and Alan Turing, many computer scientists blindly assume that a technique must exist for converting any practical activity, such as learning a language or driving a car, into a set of instructions [4, p.74]. Ludwig Wittgenstein, by contrast, opposed such a rationalist, metaphysical view of our (now digital) world. In explaining our actions, Wittgenstein would say, we must always</p>
<blockquote><p>sooner or later fall back on our everyday practices and simply say `this is what we do' or `that's what it is to be a human being' [4, p.56-57].</p>
</blockquote>
<p>Wittgenstein and computer scientists like Peter Naur and Michael Jackson would object to the supposition that human behavior can be perfectly replaced by man-made technology [2, 6]. They would disagree with a number of prominent people who have recently raised concerns that AI systems have the potential to demonstrate superintelligence. While a human will circumvent a big rock but not a crumpled-up piece of newspaper lying on the road, a self-driving car will try to drive around both [5]. Observations like these clarify why HAL, the superintelligent computer in <em>2001: A Space Odyssey</em>, remains science fiction. Why would mankind now all of a sudden be able to develop a HAL that drives on our cities' roads, circumventing pedestrians, bicyclists, and anything else that humans throw at it [7]?</p>
<div>People in the automotive industry who have patience with my philosophical reflections often end up falling into Dreyfus's trap by saying the following:</div>
<blockquote><div>Just add a new rule to the car's software to circumvent a "corner case." Keep doing this for each corner case.</div>
</blockquote>
<div>An example of a corner case is the crumpled-up piece of newspaper. If we want the car to drive over the newspaper, we need to add extra sensors and functionality to the vehicle <em>and add a new rule</em> to the vehicle's controller. (The extra hardware makes the car more expensive and the additional rule makes the software more complex.)</div>
<div>*</div>
<div>The problem with this "add a new rule" solution is that the size of the software grows linearly with the number of corner cases. Instead, software should be a model of the real world, one that is much smaller than the real world itself. That is, we want software to be a compressed representation of all the cases it covers, not an explicit list of rules.</div>
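<p>The contrast between a rule list and a compressed model can be sketched in a few lines of Python (a toy, with made-up obstacle names, properties, and thresholds, not anything resembling real vehicle software):</p>

```python
# Rule-based controller: one entry per corner case ever encountered.
# This set, and the software around it, grows with every new case.
DRIVE_OVER = {"crumpled newspaper", "plastic bag", "dry leaves"}

def rule_based_can_drive_over(obstacle_name):
    # Only recognizes obstacles someone has explicitly listed.
    return obstacle_name in DRIVE_OVER

# Model-based controller: judge the obstacle by estimated physical
# properties instead of its name -- a (crude) compressed model that
# covers infinitely many harmless objects with one principle.
def model_based_can_drive_over(mass_kg, rigid):
    return mass_kg < 1.0 and not rigid
```

<p>The sketch also makes the hard part visible: the model-based controller presupposes that the perception system can reliably estimate mass and rigidity, which is precisely the kind of open-ended, real-world judgment that Dreyfus's critique targets.</p>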
<div>*</div>
<div>I am willing to change my position on self-driving cars if good arguments are brought forward, preferably by philosophically-inclined computer scientists. What worries me is that:</div>
<div id="yiv0459315165yui_3_16_0_ym19_1_1472635566219_34575">
<ul><li>Self-driving cars are already being deployed on the streets of multiple states in the USA (and I think the public has the right to be well informed about what is really going on).</li>
</ul></div>
<div id="yiv0459315165yui_3_16_0_ym19_1_1472635566219_33984" dir="ltr">
<ul><li>I have yet to come across a paper written by a self-driving-car advocate who addresses the philosophical issues raised by Dreyfus. If you, the reader, think that Dreyfus's arguments are easily dealt with (in connection with safety-critical software, not Internet apps), then please consider writing a paper on this matter or reply to this blog post.</li>
</ul></div>
<div dir="ltr">I am aware that machine learning is part and parcel of self-driving car technology and I would love to learn more about this. But I have also been told by more than one professional that "good old-fashioned" rule-based learning is part of vehicle technology too.</div>
<div dir="ltr">*</div>
<div dir="ltr">REFERENCES</div>
<div dir="ltr">*</div>
<div dir="ltr">[1] Y. Bar-Hillel. The present status of automatic translation of languages. In F.C. Alt, editor, <em>Advances in Computers</em>, volume 1, pages 91-141. Academic Press, New York and London, 1960.</div>
<div dir="ltr">[2] E.G. Daylight. <a href="http://www.lonelyscholar.com/node/7"><em>Pluralism in Software Engineering: Turing Award Winner Peter Naur Explains</em></a>. Lonely Scholar, Heverlee, October 2011.</div>
<div dir="ltr">[3] H.L. Dreyfus. <em>What Computers Can't Do: The Limits of Artificial Intelligence</em>. Harper/Colophon, New York, 1979. Revised edition (the first edition appeared in 1972).</div>
<div dir="ltr">[4] H.L. Dreyfus. <a href="https://mitpress.mit.edu/books/what-computers-still-cant-do"><em>What Computers Still Can't Do: A Critique of Artificial Intelligence</em></a>. MIT Press, 1992.</div>
<div dir="ltr">[5] L. Gomes. Driving in circles: <a href="http://www.slate.com/articles/technology/technology/2014/10/google_self_driving_car_it_may_never_actually_happen.html">The autonomous Google car may never actually happen</a>. <em><a href="http://www.slate.com">www.slate.com</a></em>. Accessed on 2016-Aug-17.</div>
<div dir="ltr">[6] M.A. Jackson and E.G. Daylight. <a href="http://www.lonelyscholar.com/jackson"><em>Formalism and Intuition in Software Development</em></a>. Lonely Scholar, Geel, August 2015.</div>
<div dir="ltr">[7] M. Konczal. The phenomenology of Google's self-driving cars. <em><a href="http://rooseveltinstitute.org/phenomenology-googles-self-driving-cars/">http://rooseveltinstitute.org/phenomenology-googles-self-driving-cars/</a></em>, October 2014. Accessed on 2016-Aug-17.</div>
</div></div></div><section class="field field-name-field-tags field-type-taxonomy-term-reference field-label-above view-mode-rss"><h2 class="field-label">Tags: </h2><ul class="field-items"><li class="field-item even" rel="dc:subject"><a href="/taxonomy/term/17" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Self-Driving Cars</a></li></ul></section>
Thu, 01 Sep 2016 18:25:50 +0000 | egdaylight | http://dijkstrascry.com/WittgensteinCarsTuring#comments