Can Computers Think?

The Turing Test, famously introduced in Alan Turing's paper "Computing Machinery and Intelligence" (Mind, 1950), was intended to show that there was no reason in principle why a computer could not think. Thirty years later, in "Minds, Brains, and Programs" (Behavioral and Brain Sciences, 1980), John Searle published a related thought experiment, the Chinese Room, aimed at almost exactly the opposite conclusion: that even a computer which passed the Turing Test could not genuinely be said to think. Since then, both thought experiments have been endlessly discussed in the philosophical literature, without any very decisive result.

Good discussions of each can be found in the online Stanford Encyclopedia of Philosophy, in its entries on the Turing Test and the Chinese Room Argument.

See also the "Turing Test" section of the annotated bibliography from the Association for the Advancement of Artificial Intelligence (AAAI), which also includes some material on the Chinese Room.

Just as in the Thinking Matter controversy of the 17th and 18th centuries, most of this modern discussion has tended to be entirely aprioristic, as though questions about the possibilities of computing machines could be fully understood without any experience of implementing such machines or of observing their resulting behaviour. But this assumption seems extremely questionable: on investigating complex systems, we almost invariably find that our "intuitions" about them are moulded and refined by that experience. Moreover, inexperienced humans are typically very poor at judging where the boundaries of possibility lie: imagine a mathematical novice wondering how one could possibly prove that no ratio of integers can equal the square root of two (a proof sketched below), or a novice in probability or physics wondering how one could possibly prove, as Bell's Theorem does, that no local deterministic "hidden variables" can underlie the apparent randomness of quantum mechanics. Pure thought experiments – without any possibility of detailed analysis or genuine experimentation – can be quite hopeless in these circumstances. To illustrate with a parody of Searle's line of argument, we might argue like this:

Imagine that I were to write a 100-line computer program that conversed in perfect English, in such a way as to pass the Turing Test. Surely we wouldn't call such a simple program genuinely "intelligent". Hence passing the Turing Test cannot be sufficient for genuine intelligence.

The obvious fallacy here is that no 100-line computer program could possibly be anything like that powerful, so the hypothetical experiment is useless. Likewise, it might be suggested, there is no value in trying to draw conclusions about possibilities from a pure thought experiment like Searle's Chinese Room until we have some appreciation of what sort of processing would be required to make the imagined scenario feasible. If it turns out that the processing would have to be fantastically sophisticated, involving abstract models of the situations being conversed about, sensors connecting those models to the outside world in appropriate ways, powerful information-processing algorithms, and so forth, then we might well decide that our initial "instincts" about the thought experiment had been quite wrong. The moral is that philosophers cannot expect to discover anything significant from computer thought experiments unless they are prepared to investigate computing in some detail, and thus learn for themselves a significant amount about what the systems they are considering would involve.
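Incidentally, the mathematical impossibility proof mentioned above nicely illustrates the point: a boundary that baffles the novice can be crossed in a few lines once the right idea has been found. The classical argument, sketched here in standard notation, runs by contradiction:

    % Classical sketch: $\sqrt{2}$ is not a ratio of integers
    Suppose $\sqrt{2} = p/q$, where $p$ and $q$ are integers with no
    common factor. Squaring gives $p^2 = 2q^2$, so $p^2$ is even, and
    hence $p$ is even (an odd number squared is odd); write $p = 2r$.
    Substituting, $4r^2 = 2q^2$, so $q^2 = 2r^2$, making $q$ even as
    well. But then $p$ and $q$ share the factor $2$, contradicting our
    assumption; so no such ratio exists. $\qed$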

Some feel for the difficulty of implementing a system to produce "intelligent" conversation, even of a very limited sort and in a limited domain, can be acquired by playing with the Elizabeth educational chatterbot, which is hosted on this site.
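For a still simpler illustration, the following minimal sketch in Python shows the kind of shallow pattern-matching on which such chatterbots are typically built (the rules here are invented for this sketch; Elizabeth itself uses its own, much more flexible script language):

    import re

    # A few illustrative pattern/response rules in the spirit of
    # Weizenbaum's ELIZA. Each maps a regular expression to a reply
    # template that may echo back part of the user's input.
    RULES = [
        (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.*)", re.I), "What makes you feel {0}?"),
        (re.compile(r"\bbecause\b", re.I), "Is that really the reason?"),
    ]
    DEFAULT = "Please tell me more."

    def reply(text):
        # Return a canned response by shallow pattern-matching.
        # Nothing here models the meaning of the input: the program
        # merely transforms surface strings.
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(*match.groups())
        return DEFAULT

    print(reply("I am worried about my exams"))
    # -> Why do you say you are worried about my exams?

Notice that the sample reply echoes "my exams" where a human would say "your exams": the program transforms surface strings without any model of what is being said, which is precisely why scaling such tricks up to genuinely intelligent conversation is so hard.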

[Photograph: John Searle, who devised the Chinese Room thought experiment]