# Turing's Landmark Paper of 1936

Turing's paper of 1936 has the rather forbidding title "On Computable Numbers, with an Application to the Entscheidungsproblem"; to help understand this title and the context in which it was written, see the page on Hilbert, Gödel, and Turing. Here our focus will be on Turing's invention of the concept of a computing machine – what we now call a "Turing Machine".

## The Problem of Decidability

Hilbert's famous "Decision problem" ("Entscheidungsproblem" in German) was to establish whether it is in principle possible to find an *effectively computable* decision procedure which can infallibly, and in a finite time, reveal whether or not any given proposition P is provable from a given set of axioms and rules (in particular, a set of axioms and rules designed to capture arithmetical truths). An "effectively computable" procedure is supposed to be one that can be performed by systematic application of clearly specified rules, without requiring any inspirational leaps or spontaneous intellectual insights. So if such a decision procedure could be discovered, this would mean that the provability (or otherwise) of any arithmetical proposition could be determined mechanically, by an appropriate computing machine.

For rigorous consideration of this issue, however, vague notions of "effective computability" are not enough – how *exactly* are we to specify what counts as a "systematic" or "mechanical" procedure, as opposed to one that is inspirational or spontaneous? Turing's answer was to design a kind of machine with a very precise and limited repertoire of operations, but which is capable – if suitably instructed – of carrying out a vast range of possible computations.

## The Turing Machine

Briefly, a Turing Machine consists of a long tape divided into squares, onto which symbols can be written and later erased, together with a read/write head (as in a tape recorder) which can write (or erase) symbols and also read them. Such reading and writing can only take place on the particular square where the read/write head is placed, but the head is capable of moving – one square at a time – in either direction. So by moving repeatedly "left" or "right", any square on the tape can eventually be reached if required. (But note that the tape is potentially infinite in length, or if that seems too fanciful, think of new tape being created whenever necessary so that the head never runs out of space.) Each square can contain at most one symbol, so writing a new symbol on a square automatically erases any previous symbol.

In order to "remember what it is doing", the Turing Machine has a very limited memory in the form of a "state", which can take any of a specified – and finite – range of values (e.g. "b", "c" or "d"). One of these is the *beginning* state, from which computation starts (so Turing tends to use letter "b" for that). Computation then proceeds in the following way:

1. Read the symbol from the current square.

2. Depending on the symbol read, and the current state: (a) erase the symbol, or write a new one; (b) move left or right on the tape; (c) change to a new state.

Stage 2 here is governed by a "machine table" which states what operations (a), (b) and (c) are to be carried out for each combination of symbol read and current state. For example, a simple machine table that counts upwards in binary is as follows (Petzold, p. 99):

| state | read | write | move | new state |
|-------|-------|-------|-------|-----------|
| b | blank | 0 | - | i |
| i | 0 | 1 | - | r |
| i | 1 | 0 | left | i |
| i | blank | 1 | - | r |
| r | 0 | - | right | r |
| r | 1 | - | right | r |
| r | blank | - | left | i |
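As an illustration (not part of Turing's formalism), the table above can be run by a short Python simulator. The dictionary-based tape and the function name `run_counter` are our own choices; we read a "-" in the move column as "stay put", and a "-" in the write column as "leave the square alone":

```python
def run_counter(steps):
    """Run the binary-counting machine table for a number of steps.

    The tape is a dictionary from square position to symbol; absent
    squares are blank (""). Each table entry maps (state, symbol read)
    to (symbol to write or None, head movement, new state).
    """
    table = {
        ("b", ""):  ("0", 0, "i"),
        ("i", "0"): ("1", 0, "r"),
        ("i", "1"): ("0", -1, "i"),
        ("i", ""):  ("1", 0, "r"),
        ("r", "0"): (None, +1, "r"),
        ("r", "1"): (None, +1, "r"),
        ("r", ""):  (None, -1, "i"),
    }
    tape, head, state = {}, 0, "b"
    for _ in range(steps):
        write, move, state = table[(state, tape.get(head, ""))]
        if write is not None:
            tape[head] = write
        head += move
    return "".join(tape[k] for k in sorted(tape))  # tape, left to right
```

Running it for increasing numbers of steps shows the tape passing through 0, 1, 10, 11, 100, … (with intermediate scanning steps in between): exactly counting in binary.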

This seems an extremely limited repertoire of operations, but amazingly, it proves sufficient for calculating *anything that we know how to calculate*! By designing an appropriate machine table, and a way of encoding any relevant data into symbols on the tape, a Turing Machine can be made to calculate anything that is calculable by even a modern computer armed with a sophisticated programming language. This gives Turing Machines great theoretical interest, because if something can be proved to be impossible for any Turing Machine (by exploiting its very limited range of operations), then we can conclude that it would also be impossible for any other computer.

## The Halting Problem

Having defined his notion of a computing machine, Turing showed that there exist problems – notably the famous "Halting Problem" for Turing Machines – that cannot be effectively computed by this means. That is, he proved the impossibility of devising a Turing Machine program that can determine infallibly (and within a finite time) whether or not a given Turing Machine T will eventually halt given some arbitrary input I (where the input I consists of the symbols initially written on the tape, before the Turing Machine begins operating).

The unsolvability of the Halting Problem is easier to show with a modern programming language, though the principles are the same. Suppose, then, that we have a program P which takes two inputs: the text of a program T, and a list I of inputs to that program. What program P does is to tell us whether the program T will halt (i.e. will avoid going into an infinite loop and so will eventually stop running) when it is given input I. Thus program P itself will always terminate, giving either the output "Yes" (if T will halt on input I) or "No" (if T will not halt on input I). We now proceed to prove that such a program P is an impossibility, because if it existed, it would generate an absurdity.

Program P processes its inputs T and I, calculates whether program T will halt given input I, and then having done this calculation outputs either "Yes" or "No" before terminating. What we now do is to edit P to create a slightly different program Q, by replacing both the input routine and also the instructions that follow the calculation. First, we fix program Q to take only a single input X, treating this as playing the roles of *both* T and I. Secondly, at the point where program P says "output('Yes')" (or whatever the appropriate instruction is in the programming language being used), program Q says "repeat; output('Loop!'); until 0=1". This instructs the program to output the word "Loop!", and to continue doing so until 0 equals 1 (which, of course, never happens, so the loop continues forever).
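The construction just described can be sketched in Python (a hypothetical choice of language; the argument itself is language-independent). Here `p` stands for the supposed halting decider, and `make_q` performs the edit that turns P into Q; the stubbed deciders at the end are purely illustrative, showing that whichever answer P gives about Q run on itself, Q's actual behaviour is the opposite:

```python
def make_q(p):
    """Build program Q from a claimed halting decider p.

    p(t, i) is assumed to answer "Yes" if program t halts on input i,
    and "No" otherwise (no such p can really exist, as we shall see).
    """
    def q(x):
        # Q feeds its single input x to p as both program and input.
        if p(x, x) == "Yes":
            while True:      # p said "halts", so loop forever instead
                pass
        return               # p said "does not halt", so halt at once
    return q

# Whatever p would answer about (Q, Q), Q does the opposite:
for answer in ("Yes", "No"):
    def p_stub(t, i, a=answer):   # a decider forced to give this answer
        return a
    q = make_q(p_stub)
    q_would_halt = (answer == "No")   # by inspection of q's two branches
    assert q_would_halt != (p_stub(q, q) == "Yes")
```

The two-way `assert` at the end is just the dilemma from the text in executable form: if p answers "Yes" about (Q, Q), Q loops; if p answers "No", Q halts. Either way p is wrong, so p cannot exist.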

The question we can now ask is: *Does program Q halt when given the text of program Q as input?* We will find that no consistent answer can be given. Suppose on the one hand that Q would halt given Q as input. Then program P, given the inputs Q and Q, would end with the instruction "output('Yes')". But in program Q, we have replaced that instruction with an ever-repeating loop. Hence it follows that Q would not after all halt given Q as input, which contradicts the assumption we are making. Suppose on the other hand that Q would not halt given Q as input. Then program P, given the inputs Q and Q, would halt with the instruction "output('No')"; but in this respect program Q is unchanged from P, so it too would halt given Q as input, and again we have a contradiction. Combining the two sides of this dilemma, we have that Q would halt given Q as input *if and only if* Q would not halt given Q as input, and that is an outright contradiction. So program Q is an impossibility. But program Q was derived from program P by very straightforward editing. It follows that program P must also be an impossibility. Therefore it is not possible to write a program which infallibly tells us whether any program T will halt given input I: the Halting Problem is unsolvable!

## Turing's Achievement

It is remarkable that in 1936 – many years before any general-purpose computer would become practically feasible – Alan Turing was able to devise such a powerful yet simple model of what such a computer could be. Indeed his model proved so useful and elegant that it has provided the standard definition of computability – *Turing Machine computability* – ever since (so people now discuss, for example, whether a quantum computer might be able to achieve things that a Turing Machine could not). What makes the notion of computability so interesting, moreover, is Turing's own discovery that there are some things which are incapable of computation, including problems that are well-defined and understood, and indeed of real practical significance. Thus it is not logically possible – however clever we might be at programming – to write a computer program which can reliably distinguish between programs that halt, and those that "loop" forever. This is a stunning result, and it has since been followed by many others in the fascinating study of computability, which was given birth by Turing's historic paper.

**The Annotated Turing**

*Charles Petzold's book makes Turing's 1936 paper accessible to the non-specialist reader*