Submitted by egdaylight on


Concerning Cantor's diagonal argument in connection with the natural and the real numbers, Georg Cantor essentially said: assume we have a bijection between the natural numbers (on the one hand) and the real numbers (on the other hand); we shall now derive a contradiction ... Cantor did not (concretely) enumerate through the natural numbers and the real numbers in some kind of step-by-step fashion, i.e., 'constructively' or 'operationally' as some would say today. Instead, the purported correspondence — i.e., the completed enumeration — is hypothetically laid out in front of us in the Platonic realm. (These aren't Cantor's words, nor is the sequel historical. But to understand the history of mathematics and computer science, attempting to reason like a mathematician seems like an awfully good idea.)

Yet, in some niches of computer science there is a tendency to explain (e.g., informally to students) Cantor's argument in a stage-by-stage fashion, eschewing Platonism. Each stage contains a finite amount of information, and each next stage is, content-wise, some finite extension of the previous stage. Let me thus play the advocate of this particular kind of computer scientist — i.e., someone who embraces potential infinity and eschews actual infinity, also known as *a potentialist* — and see where I go astray.

As a potentialist I want to concoct a stage-by-stage version of Cantor's diagonal argument; that is, I will only embrace potential infinity and stress that one can (concretely) enumerate through the natural numbers and one can enumerate through ever-increasing approximations of all the real numbers in some dovetailing fashion. The idea is that each approximation (of a real number) at stage *i* grows in length as the potentialist transitions to stage *i+1*; furthermore, the number of approximations increases as well. As the advocate of the potentialist, and with some handwaving, I then stipulate that, *'in the limit,'* we *shall* obtain an infinitely long enumeration of entries, with each entry being infinitely long.
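For concreteness, such a stage-by-stage table can be sketched in a few lines of Python. The digit rule below is a mere placeholder (any computable rule for generating rows would do); the point is only that stage *i* is an *i*-by-*i* block of bits and that the anti-diagonal grows from stage to stage:

```python
from fractions import Fraction

# Placeholder digit rule: binary digit `col` of the fraction row/(row+2).
# Any computable rule for generating rows would serve just as well.
def digit(row, col):
    return int(Fraction(row, row + 2) * 2 ** (col + 1)) % 2

def stage(i):
    """Stage i: an i-by-i table; row r holds the first i binary digits
    of the r-th approximated real."""
    return [[digit(r, c) for c in range(i)] for r in range(i)]

def anti_diagonal(table):
    """Flip the k-th digit of the k-th row: the growing anti-diagonal."""
    return [1 - table[k][k] for k in range(len(table))]

for i in (1, 2, 3, 4):
    t = stage(i)
    print(i, t, anti_diagonal(t))
```

Each stage extends the previous one both in width and in number of rows, and the anti-diagonal at stage *i* is an initial segment of the anti-diagonal at stage *i+1*.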

There *seem* to be at least two problems with this reasoning. First, if the potentialist strictly reasons *in the limit*, then he never reaches infinite precision (for each of his real numbers). What he actually means to say is that, *at the limit*, we *have obtained* the desired (infinite precision) result. But, then he is embracing actual infinity after all! I opine that more than a handful of lecturers in computer science conflate the "in the limit" with "at the limit," falsely believing that the whole argument can be captured with potential infinity.

But am I right to say this? For, here is a rebuttal which I received from a colleague: "I cannot agree: one defines an infinite process, but this is done piecewise and cannot be regarded as a recourse to actual infinity." Indeed, the potentialist defines a process piecewise (that is, stage-by-stage), embracing potential infinity and (indeed) making no appeal to actual infinity. My colleague then uses *streams* to build up a process which, stage-by-stage, generates a diagonal *d* at each stage, such that *d* is an initial segment of a longer diagonal *d'* at a later stage. And so on. My colleague continues: "it should be possible to build the diagonal as a growing totality." This is all fine with me. But then he says: "If I am right, there is no need to assume that the enumeration must exist as a completed infinite sequence in order to apply the diagonal argument." So this seems to suggest that the potentialist can obtain Cantor's result about infinite-precision numbers while remaining in finite territory at all times.

I don't know what to make of this. My hunch is that streams are semantically defined with actual infinity (and my colleague has now confirmed this). And if one wants to entertain Brouwerian-like semantics, then see below for my comment on Brouwer. At any rate, some lecturers will accept my lines of argumentation presented so far and they will then clarify that they are (indeed) reasoning operationally *and with* an actual infinity in play. So now we don't have a potentialist any more but an actualist (like Cantor) who reasons in a step-by-step fashion (unlike Cantor) — let's call this particular kind of computer scientist an *operational actualist*.

The intellectual position of the operational actualist brings me to the second problem: How does he constructively transition from stage *i* to stage *i+1*? Isn't it a key feature of computer science, not to mention Cantor and the foundations of mathematics, that there is no recipe for achieving such transitions which guarantees that, if we ever come to the end of the process, *all* real numbers will have been enumerated; that is, that they then stand in a *bijection* to the naturals? The answer is affirmative if the lecturer abides by what he is about to teach the next week — cf. the theory of Turing machines. (I guess the answer could potentially be negative if some kind of an appeal is made to Brouwer's intuitionism; however, it should be noted that Brouwer was not an operational actualist to begin with. For Brouwer, a real number was *not* an infinite sequence of digits. Fully embracing Brouwer's position will thus not work out here either. With that said, his position leads to a totally different mathematical landscape, which I take to be equally if not more interesting than the contents of the present blog post.)

**Begin of Intermezzo.** But what if we entertain other forms of stage-by-stage diagonalisation, *just for the sake of argument*? So far, I've assumed that each stage *i* is to contain an *i x i* Cantorian table, with each row depicting an *i*-digit approximation of a real number. That is, each stage contains a square, and each later stage contains a greater square. But what if we work with rectangles instead of squares? To do so, let us simplify matters and work with binary notation on all real numbers *inside* the interval *(0,1)*. Each binary approximation (which is a finite string of bits) has to be extended from one stage of reasoning to the next. It is perfectly possible (i.e., algorithmically) to list all choice sequences of length *n* at stage *n* and then list all choice sequences of length *n+1* at stage *n+1*. The number of choice sequences at stage *n+1* is double the number of choice sequences at stage *n*.

The diagonal I obtain (in my specific scheme at home, not expounded here) is *0* at stage *1*, is *00* at stage *2*, is *000* at stage *3* and, in general, is a string of zeroes of length *n* at stage *n*. The anti-diagonal, then, is a string of ones of length *n* at stage *n*. In the limit, I will then obtain an infinitely long string of only ones, which (since we're working with binary notation) would represent the number *1* and, alas, *not* some real number *inside* the interval *(0,1)*.
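For concreteness, here is a small Python simulation of this rectangular bookkeeping. It uses a plain lexicographic ordering of the choice sequences (not my home scheme, so its diagonal is not the all-zeroes string), but the counting is the same under any ordering:

```python
from itertools import product

def stage(n):
    """All 2**n binary choice sequences of length n, in lexicographic order."""
    return [''.join(bits) for bits in product('01', repeat=n)]

def diagonal(rows):
    """Bit k of row k, for k up to the row length: only the first n of the
    2**n rows are ever consulted."""
    n = len(rows[0])
    return ''.join(rows[k][k] for k in range(n))

def anti_diagonal(rows):
    return ''.join('1' if b == '0' else '0' for b in diagonal(rows))

for n in (1, 2, 3, 4):
    rows = stage(n)
    print(n, len(rows), diagonal(rows), len(rows) - n, 'rows not consulted')
```

At stage *n* the diagonal is *n* bits long while the table holds *2^n* rows, so the fraction of rows consulted shrinks rapidly as the stages proceed.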

The crux is that only a small fraction of the total number of choice sequences are used (at any stage in the process) to actually obtain the diagonal (at that stage): at stage *n*, each choice sequence is *n* bits long and, hence, the diagonal is *n *bits long, but there are *2^n* choice sequences available and *2^n - n *of them are *not* consulted to concoct the diagonal bit string at that stage. **End of Intermezzo.**

To recapitulate, a Cantorian-like diagonal argument does not work if, following the potentialist, we are only allowed to reason operationally; that is, in stages, with each stage containing a finite amount of information. Nor does it *seem* to work if, following the operational actualist, we reason both operationally (i.e., with potential infinity) and at infinity (i.e., with actual infinity). To claim that the latter simply *cannot* work would require an appeal to some tenet or hypothesis. Why? Because in the next intermezzo I, as an operational actualist, provide a method to get the job done after all, although I then hastily admit that my solution seems unreasonable on intuitive grounds.

**Begin of Intermezzo.** Instead of working with rectangles (as in the previous intermezzo) I now work with triangular shapes. That is, each stage *i* is initially filled up with *2^i* rows, each depicting an *i*-digit approximation of a real number, but then the lower rows are extended with more bits (in conformity with later stages of the entire stage-by-stage process) so that the concocted diagonal at stage *i* (which has a length of *2^i* bits) is based on *all* rows that constitute the stage at hand; that is, *each binary choice sequence contributes to the making of the diagonal*. Specifically, we have:

- Stage *1* contains all (two) choice sequences of one bit, but the second choice sequence in the table is appended with one bit — in conformity with the later stages in the entire process.
- Stage *2* contains all (four) binary choice sequences of length two, but the third choice sequence in the table is appended with one bit and the fourth choice sequence is appended with two bits — in conformity with the later stages in the entire process. At stage *2* the diagonal *d* is built up from the first bit of the first choice sequence, the second bit of the second choice sequence, the third bit of the third sequence, and the fourth bit of the fourth sequence.
- At stage *3*, each choice sequence is at least three bits long: the first three sequences are precisely three bits long, the fourth sequence is four bits long, the fifth sequence is five bits long, the sixth sequence is six bits long, etc.
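For concreteness, this triangular bookkeeping can also be simulated. In the sketch below the appended bits are simply zeroes (a placeholder; the scheme only demands conformity with later stages, without fixing the bits), and one can check that row *j* at stage *i* is max(*i*, *j*) bits long and that every row contributes a bit to the diagonal:

```python
from itertools import product

def stage(i):
    """Stage i of the triangular scheme: all 2**i choice sequences of length i
    (lexicographic order), with row j (counting from 1) padded out to
    max(i, j) bits. The padding bits are a placeholder (zeroes)."""
    base = [''.join(bits) for bits in product('01', repeat=i)]
    return [row + '0' * max(0, (j + 1) - i) for j, row in enumerate(base)]

def diagonal(rows):
    """Bit j of row j, for *all* rows: every choice sequence contributes."""
    return ''.join(rows[j][j] for j in range(len(rows)))

for i in (1, 2, 3):
    rows = stage(i)
    print(i, [len(r) for r in rows], len(diagonal(rows)))
```

The diagonal at stage *i* is *2^i* bits long: its length doubles with every transition.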

On the one hand, these stages converge *at the limit* to a list of *all* real numbers (represented in binary notation) in the interval *(0,1)*. On the other hand, as *i* goes to infinity, the transition from stage *i* to stage *i+1* takes longer. Indeed, we have exponential growth: the length of the diagonal doubles from one stage to the next. While all transitions are operational, most are not operationally feasible (in some intuitive sense).

In sum, as an operational actualist I have just described a process that, at the limit, obtains the desired bijection between the naturals and the reals. However, my "process" can hardly be called feasible, for it slows down: stage *i+1* takes twice as long as stage *i* (for every *i*). This implies, to the best of my current understanding, that my "process" will never reach its final objective: a bijection between the naturals and the reals.

But, as of today, I'm still in doubt as to whether my reasoning is correct. Let's recapitulate. By definition, the operational actualist is an actualist; that is, someone who is allowed to rely on a completed infinity in his reasoning. He is thus permitted to recast an infinitely long computation as one mathematical object. (Recall: that's what sets him apart from the potentialist to begin with.) Hence, item 1 below is no obstacle for the operational actualist.

1. there are infinitely many stage-to-stage transitions in my "process";
2. there is a jump from *'in the limit'* to *'at the limit'* reasoning.

In contrast to the actualist (e.g., Cantor, who just lays everything out in front of us, hypothetically), the operational actualist has the obligation to show that a jump can be made from the finite territory of the potentialist to the bijection of the actualist; that is, from intensional to extensional reasoning. Whence my item 2. Alas, it is not at all clear what that *jump* entails. For starters, if the jump had constant "cost" (whatever that means), then the operational actualist would be in business: my "procedure" would work out just fine for him if, say, my completed totality was a finite totality (for then the "cost" is constant). In the present case study, however, a jump is to be made from finite territory to Cantor's bijection. The cost is not linear but exponential when we stick to *in the limit* reasoning. Perhaps this observation captures the discrepancy between my "procedure" (intensional reasoning) and, after the jump is made, Cantor's bijection (extensional reasoning). The latter does not follow naturally from the former due to the exponential cost function. (This sort of makes sense.)

**In retrospect, if I can only resort to one Turing machine tape (or a fixed, finite number of such tapes), then it is simply wrong of me to claim that my process gets the job done (in exponential time). For, the only thing it provides, at the limit, is an infinitely long sequence of bits, not a table of rows (with each row depicting an infinite-precision real). The outcome is no clear-cut bijection but a crafty encoding of the bijection in one dimension.** However, if my process can resort to infinitely many Turing machine tapes (one tape per row), then it seems like I'm back in business.
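Such a one-dimensional encoding is standard fare: a pairing function bijects the cells (row, column) of a table with the positions of a single tape. A minimal sketch (the pairing is Cantor's own, and nothing specific to my process):

```python
def cantor_pair(row, col):
    """Cantor's pairing function: a bijection between N x N and N, so one
    tape (one dimension) suffices to encode an entire table of rows."""
    return (row + col) * (row + col + 1) // 2 + col

def cantor_unpair(z):
    """Recover (row, col) from a single tape position z."""
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    t = w * (w + 1) // 2
    col = z - t
    return w - col, col

# Cell (row, col) of the table lives at tape position cantor_pair(row, col),
# and that position can always be decoded back to the cell.
assert all(cantor_unpair(cantor_pair(r, c)) == (r, c)
           for r in range(50) for c in range(50))
```

The encoding is lossless, but the table of rows is only recoverable via this extra layer of arithmetic, which is exactly why the outcome on one tape is not a clear-cut table of rows.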

With that said, my process merely needs to return the anti-diagonal of the bijection (at the limit), not the bijection itself. [Uhum.] **End of Intermezzo.**

It thus seems that Cantor's argument can only make sense if all the (infinite precision) numbers are hypothetically laid out in front of us from the very beginning. It thus seems that computer science theory (which talks about computable real numbers *and* uncomputable real numbers) hinges on actual infinity and Platonism, despite the preference for potential infinity and step-by-step reasoning. Or is it, in contrast to what the history of mathematics seems to be telling us, feasible to combine both kinds of reasoning in some new way? (Wouldn't that lead to a contradiction? Doesn't actual infinity pertain to a human-independent reality of mathematics and potential infinity to a human activity of *doing* mathematics?) Can a Turing machine defined with potential infinity access an oracle number which is actually infinite in length? (I don't think so. How can that even make sense ontologically?) Or* must* a TM tape be defined with actual infinity? (I think so.)

My contention then — which is trivial to set theorists — is, that, Cantor's diagonal argument is more akin to a Platonic thought experiment (cf. actual infinity & proof by contradiction) than to an operational device (cf. stage-by-stage reasoning on the basis of some law or recipe) in the vicinity of — let alone inside — an empirical world. (But isn't the difficulty for unrestricted Platonism that it leads to contradictions? What, then, *is* computer science? I have argued elsewhere that it is based on a conflation.)

Can a similar assessment be made of Turing's 1936 paper? Did Turing share Cantor's Platonism or was he more like a present-day computer scientist; that is, a potentialist or an operational actualist? Perhaps Turing's outlook was something else altogether? (In my reading Turing did not categorically distinguish between an abstract static world of universals and the concrete changing world of individuals, and this was due to his philosophy ...) Fascinating topic, to be continued.

**[Last update: 16 September 2022. Since the topic of diagonal arguments is abstruse, this post will most likely be updated, extended, refined and corrected repeatedly. I thank J.R. for bringing the topic of this post to my attention & for commenting on my many drafts.]**