mind, matter, meaning and information

the turing test

In a paper written in 1950 Alan Turing focused on the question “Can machines think?” Feeling, however, that there were difficulties with the definition of thought, he suggested a simple test to which a computer might be subjected. The essential feature of this test is the ability to answer questions. Other factors are made irrelevant by allowing communication only through a teletype (the modern equivalent of which is a computer terminal—basically, a keyboard and screen). The judge types a question, and can read the response. As far as she knows, the entity responding may be either a person or a machine, and she has to guess which. If, after five minutes or so, the judge decides that she is dealing with a person when in fact her correspondent is a machine, the machine has passed the test. This has become known as the Turing Test.

In 1991 the first annual Loebner Prize Competition was run at The Computer Museum in Boston, Massachusetts. A number of identical terminals were set up, some connected to other terminals within the Museum at which human “confederates” were waiting to respond to enquiries, while the others were connected to computers at other sites running the programs that had been entered in the competition. (Programs were judged against each other, as well as against the confederates.) However, the organisers did not believe that, given the state of the art, any program could compare with a person in an unrestricted competition, so each entrant was allowed to stipulate a topic on which their program would converse. The confederates also, of course, operated within a similar restriction. The terminal to which the winner was connected was labelled “Whimsical Conversation,” and this program actually managed to convince a majority of the judges that it was human.

Anyone who has played with one of the computer games of this type (some of which have been around for quite a while) will appreciate the advantage of such a topic. In the circumstances, it should not be too surprising that so many judges were fooled. In fact, the first recorded case of something like this occurred many years previously. [note 1]
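Programs of this kind have typically worked by shallow pattern matching rather than by anything resembling understanding. As a purely hypothetical illustration (not the actual Loebner entry, whose workings are not described here), a minimal ELIZA-style responder might look like this; the patterns and replies are invented for the example:

```python
import re

# A minimal ELIZA-style responder: canned patterns, no understanding.
# The rules and replies below are illustrative, not taken from any real program.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\?$"), "What do you think?"),
]
DEFAULT = "Tell me more."

def respond(line: str) -> str:
    """Return a reply by echoing back the first pattern that matches."""
    for pattern, template in RULES:
        match = pattern.search(line.rstrip())
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel lonely tonight"))  # -> Why do you feel lonely tonight?
print(respond("hello"))                  # -> Tell me more.
```

A handful of such rules can sustain a surprisingly convincing exchange, especially when the topic is restricted (as in the “Whimsical Conversation” entry), since the judge's own questions do most of the conversational work.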

Now, the obvious objection to the Turing Test is that it does not tell us anything about what is really going on within the entity concerned. To say that whatever passes the Test must therefore think is to propose a definition of thought that seems to conflict with the way that concept is normally used: we tend to feel that thinking is an “internal” process that may or may not have externally observable consequences.

The question “Can machines think?” Turing believed to be “too meaningless to deserve discussion.” (ibid., page 57) Thought was not sufficiently well defined. On the other hand, he also believed that “at the end of this century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” (ibid.) And he obviously considered his Test to be significant in this context: though there is no point in debating whether machines really can think, by the end of the century not only (a) will computers capable of passing the Test exist, but (b) they may be spoken of as “thinking machines.”

Now that we have passed the date Turing had in mind, claim (a) is ruled out. However, it should not be overlooked that machines have managed to convince a good proportion of reasonable people that they were human, even if the circumstances were somewhat contrived.

In traditional philosophical fashion, Turing suggested a number of hypothetical arguments against his position, and then attempted to refute them. The one most relevant here, he called The Argument from Consciousness. This argument, he felt, had been very well expressed by a Professor Jefferson:

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants. (ibid., page 60)

The relationship between thinking and consciousness is considered elsewhere [not yet online]. For the sake of this particular argument, let us just follow Turing in taking these to be very closely related, if not synonymous. Turing's reaction to the suggestion that we should doubt the consciousness of a machine that acts as if capable of thought, is rather interesting: if we doubt such a machine, we should also doubt other people, because we have in fact no real evidence for their being conscious, either.

According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsistic point of view. [ibid. Emphases in the original.]

Turing is simply wrong here, because to know that someone thinks and/or feels, there is no need to be that particular man—we know that people think because we are people ourselves. For reasons about which we can only speculate, but which might not be unconnected with his difficult personal life, Turing focuses on the individual and neglects the fact that we are all of the same type, so we naturally assume that others think and feel, as we do—he seems unaware of how much we share. A machine might be intelligently designed to behave like a person, but people are necessarily, intrinsically like other people. In saying that all that matters is that machines will be spoken of, and thought of, as thinking, Turing implies that our knowledge of other minds is subjective, but it is neither that nor objective—it is intersubjective.

Copyright © 1998--2005 by Robin Faichney.
This material is subject to the Open Publication License — please see here.
Last modified Monday, April 02, 2001 by Robin Faichney.