mind, matter, meaning and information


the turing machine

To understand why Turing came up with the Turing Test, we should consider what is probably his main contribution to computing science: the Turing Machine. This is a conceptual machine, not a concrete one, though implementations of it do now exist—in fact, every general-purpose digital computer is a Turing Machine—but Turing published his paper in 1936, about 10 years before the first such computer actually appeared, and some 15 years before the Turing Test paper. What he achieved with the earlier paper was to pin down in precise terms the fundamental nature of information processing. That is what the Turing Machine does—unlike the older sort of machine, which manipulated matter, it manipulates information. The typical modern computer, being programmable, is in fact a universal Turing Machine: it can, in principle, perform any information processing procedure that its programmer cares to specify (provided, of course, that the procedure is sufficiently well defined).
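
For readers who like to see the idea in code, here is a minimal sketch of such a machine in Python (my own illustration, not anything from Turing's paper): a table of rules reads one symbol at a time from a tape, writes a symbol back, moves left or right, and changes its internal state. The rule table shown simply flips the bits of a binary string; a universal machine is one whose rule table interprets a description of any other machine placed on its tape.

    def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10000):
        """Apply a table of (state, symbol) -> (new state, write, move) rules to a tape."""
        cells = list(tape)
        head = 0
        while state != "halt" and max_steps > 0:
            max_steps -= 1
            symbol = cells[head] if 0 <= head < len(cells) else blank
            state, write, move = rules[(state, symbol)]
            if head == len(cells):      # grow the tape to the right as needed
                cells.append(blank)
            elif head < 0:              # grow the tape to the left as needed
                cells.insert(0, blank)
                head = 0
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells).strip(blank)

    # A toy rule table: flip every bit of a binary string, then halt on the blank.
    flip_rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }

    print(run_turing_machine(flip_rules, "10110"))  # prints 01001

Swap in a different rule table and the same simulator computes something else entirely, which is the sense in which such a machine is programmable.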

When Turing turned his attention to thought, he treated it as information processing. Our linguistic ability, in particular, is very well suited to such a view. We take in one stream of symbols, whether encoded as sounds or as marks on paper or on a screen, and may respond with another, similar stream, perhaps via speech or by typing on a computer keyboard. The first stream is operated upon, in the context of our “internal state”—general knowledge, etc.—to produce the second one; and that is just how the Turing Machine operates.

Now, it does not matter how a Turing Machine is designed, or what it is made of, as long as it has the capacity to process information. Any two machines that, given the same input information, produce the same output information are exactly equivalent, whether they use mechanical cogs and levers, solid-state electronics, or networks of neurons. The actual mechanism involved is no more relevant than is the difference between reading a document printed on paper or on a computer screen. What matters is not the material thing, but its functionality. So the digital computer and the human brain are computationally equivalent—the computer, given enough memory and processing power, could be programmed to respond linguistically just like a person—and Turing believed we might as well say that such a machine thinks.

The classic (though far from universally accepted) response to the Turing Test is the Chinese Room. A philosopher called John Searle conjured up this scenario to show that a system that is quite capable of responding as if it understands a language might nevertheless lack any real understanding whatsoever. Thus, a machine could pass the Turing Test without understanding.

We have to imagine a room, or a large box, within which there is perhaps a desk and chair, and some book shelves and filing cabinets, and a person. There is also a letterbox-type arrangement, by which that person can communicate with the outside world. Now, every so often a piece of paper is dropped into the letterbox, with something written on it in Chinese. The person does not know any Chinese whatsoever, but they have instructions, in books or files or whatever, on how to process Chinese input to give Chinese output. When the string of symbols on the paper has been processed to give a different string of symbols on another piece of paper, that second piece of paper is pushed out through the letterbox.

The Chinese Room is thus capable of correctly answering questions written in Chinese, even though the person operating it has no knowledge of the language. (Please bear in mind that this is a “thought experiment,” designed to elucidate certain principles, so practicalities such as speed of execution are irrelevant.) According to Searle, because the operator is merely following an explicit set of instructions and does not know what the symbols actually mean, the Room is essentially a machine (the instructions being the program), and though capable of competent linguistic performance, has no understanding of what it is doing.
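
To underline how little the operator needs to know, here is a crude sketch of the Room as a program (my own illustration, with an invented two-entry rule book, not anything from Searle's paper): it matches the incoming string of symbols against a table and copies out the associated reply, and nothing in it represents what any of those symbols mean.

    # A toy "rule book": incoming strings of symbols mapped to outgoing strings.
    # The glosses in the comments are for the reader; the program never uses them.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "会，一点点。",  # "Do you speak Chinese?" -> "Yes, a little."
    }

    def chinese_room(question):
        # Follow the instructions blindly: look the symbols up, push the answer out.
        return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # prints 我很好，谢谢。

A real Room would of course need a vastly larger and subtler rule book, but the point survives scaling: the procedure is lookup and copying, not comprehension.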

Like Turing, Searle suggested a number of objections that people might make to his argument, and attempted to answer them. One such objection was The Systems Reply: though the person does not understand Chinese, the system as a whole does—after all, its performance is indistinguishable from that of a Chinese-speaking person—so why should we discriminate against machines?

For me, Turing made a great contribution to the understanding of the mind when he identified it as a processor of information, and a “high level” entity, that is, one that in a sense transcends mere matter. (Though I try to keep an open mind, I do tend not to believe in the supernatural.) But his account does, as Searle pointed out, omit a seemingly essential aspect: the subjective element, what it “feels like” to think. And The Systems Reply—though I agree that understanding is a high level phenomenon, a property of systems—is guilty of the same omission: both thinking, according to Turing, and understanding, according to proponents of the Reply, should be defined so as to describe the performance of a system, regardless of what might or might not be going on within it.

The relationships between thinking, understanding and consciousness were perhaps taken rather for granted by both Turing and Searle, but that need not worry us too much, as it is clear that the major problem concerns subjectivity, objectivity, and intersubjectivity. In the previous paragraph it is stated that Turing omits the subjective aspect of thinking, what the experience of doing it is actually like. That may seem to contradict what I say in the turing test: that he implies our knowledge of other minds is subjective. However, the subjectivity Turing believes in is that of the attributor of consciousness—in other words, attribution is merely a matter of opinion—while the subjectivity in which he does not believe is that of the attributee, the person (or thing) that is being judged conscious—or at least he considers their experience irrelevant. In the paradigm case, though, i.e. when the attribution of consciousness is correct, both sorts of subjectivity are required, those of the attributor and the attributee, which is why such attribution is neither subjective nor objective, but intersubjective.



Copyright © 1998–2005 by Robin Faichney. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, v1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/). Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder. Distribution of the work or derivative of the work in any standard (paper) book form is prohibited unless prior permission is obtained from the copyright holder.