Tuesday, May 5, 2009

The Turing test and the Chinese Room

John mentioned the Turing test in comments on my last post, so I thought I'd say a few words about it. The Turing test was Alan Turing's attempt to define objective criteria for answering the question, "Can machines think?" (See Turing's 1950 paper, "Computing Machinery and Intelligence".) Essentially, the criterion Turing proposed was that a machine is intelligent if, in conversation, it can persuade an external observer that it's intelligent.

In this post, I want to summarize one of the most influential critiques of the Turing test, and explain what I think is right about it. The critique is from John Searle's 1980 paper, "Minds, Brains, and Programs", and it's known as the "Chinese Room" argument.

1. THE CHINESE ROOM
Searle proposes the following thought-experiment: imagine you've been placed in a room with a large pile of papers printed with characters in a language you don't understand—call it Chinese. Through a slot in the wall, someone occasionally inserts some more pieces of Chinese writing. You've been provided with a detailed set of rules (in English) for correlating one set of papers with the other, based only on the shape of the Chinese symbols. No prior knowledge of Chinese is required to follow these rules: you just look up the symbols on the papers that come through the slot, choose the characters that the rule-book calls for from your stock-pile, and push these papers out of the room through the slot.

Unbeknownst to you, the papers being inserted into the room are questions written by native Chinese speakers, and the papers you're pushing out of the room are answers to these questions. The set of rules you're following is so sophisticated that to those outside, the room (or whatever's inside it) appears to be carrying on a perfectly fluent conversation in Chinese. You are equally unaware of the fact that those who designed the room and wrote the rules that you're following consider the papers that come in through the slot “input,” the papers you push through the slot “output,” and the rules you're following a “program.”

This “Chinese Room” is a computer, albeit a strange one: instead of magnetic memory and a CPU made of silicon transistors, it's built out of stacks of paper and a human being. Nevertheless, the room functions in the same way that a digital computer does: it manipulates and responds to symbolic input according to purely syntactic rules.
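To make the analogy concrete, here is a minimal sketch of such a purely syntactic machine, in Python. The rule table is an invented placeholder, and everything in it is treated as an opaque string: the program matches the shapes of the incoming characters and returns whatever the table pairs with them, without ever consulting what they mean.

```python
# A toy "Chinese Room": the whole rule-book is a lookup table keyed on
# the shape of the incoming symbols. Nothing below depends on what the
# characters mean; the program only matches strings and emits strings.
# (The entries are a tiny invented fragment, just for illustration.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "当然会。",
}

DEFAULT_REPLY = "请再说一遍。"  # pushed out through the slot when no rule matches

def chinese_room(slip: str) -> str:
    """Return whatever the rule-book pairs with the incoming slip of paper."""
    return RULE_BOOK.get(slip, DEFAULT_REPLY)

if __name__ == "__main__":
    # To an outside observer this looks like a fluent reply in Chinese;
    # to the system it is only string matching.
    print(chinese_room("你好吗？"))
```

Any talk of "questions" and "answers" here belongs to us reading the table, not to the function itself.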

2. INTROSPECTIVE ILLUSIONS?
The strange computer in Searle's thought-experiment passes the Turing test with flying colors: it appears to Chinese-speaking observers to be fluent in Chinese, and to understand the questions they put to it. The question is whether this appearance of understanding is sufficient to show actual understanding.

Searle argues that it is not, on the grounds that the human being inside the room doesn't understand a word of the conversation she is participating in. As Searle puts it, "whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything."

This argument has been met with a number of objections. Defenders of the Turing test have generally argued that Searle hopelessly muddies the issues by inserting a human homunculus into the workings of his Chinese-speaking computer. The Chinese Room argument seems to rely on vague, introspective intuitions about what and how we understand, intuitions that may or may not be empirically accurate. The whole point of the Turing test is to avoid such subjective definitions of understanding. According to its defenders, the only objectively valid criterion is the results the system produces; if a computer’s performance of a given task is indistinguishable from that of a human being, then the computer understands the task just as well as the human does, regardless of what it "feels like" for either.

3. THE PROBLEM OF MEANING
The answer to this critique, and the true strength of the Chinese Room argument, lie in a further point that Searle makes about the meaning of the computer’s inputs and outputs: “the formal symbol manipulations by themselves… are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics.”

The computer only appears to be thinking, to know and understand things about the world, because its inputs and outputs are symbolic, and thus appear to have a meaningful content. But symbols have no content in themselves, for in themselves they are not even symbols, but only things—ink on a page, or colored pixels on a screen. They are meaningful only for beings who can interpret them as symbols, and find a meaning in them. Searle’s crucial point about the computer is that it is not such a being. The Chinese Room’s inputs and outputs appear to its human observers to be meaningful Chinese sentences, but they have no such meaning for the computer itself.

The “correctness” of the computer’s outputs, its apparent fluency in Chinese, lies entirely in the interpretation given to these outputs by its human interlocutors. The Chinese Room itself is utterly incapable of distinguishing between correct and incorrect outputs, since for it these outputs are nothing but physical effects of physical inputs, the end of a complex chain reaction. In our attempt to give an objective definition of understanding, we have ended up attributing to the computer properties that are only in the eye of the observer.

This can be seen even more clearly if we imagine that, instead of “conversing” with human interlocutors, the Chinese Room exchanges inputs and outputs with another, identical Chinese Room. There should be no temptation, in this scenario, to say that Chinese is being spoken or understood. There is here only a mechanical exchange of inputs and outputs, one computer triggering an automated response in the other, in a closed feedback loop. This is not a conversation.
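The same toy style makes the point vivid (again, the table and the tokens are invented placeholders): wire two copies of the lookup function to each other, so that each one's output slip becomes the other's next input slip. The loop runs on indefinitely, and at no point does anything in it interpret a symbol.

```python
# Two identical toy "rooms" exchanging slips in a closed loop: each room's
# output becomes the other's input. The rule table is an invented placeholder;
# the "conversation" is nothing but one lookup triggering another.

RULES = {"甲": "乙", "乙": "丙", "丙": "甲"}

def room(slip: str) -> str:
    # Purely mechanical: match the incoming string, emit the paired one.
    return RULES.get(slip, "甲")

slip = "甲"
for exchange in range(5):
    slip = room(slip)   # Room A responds to the incoming slip
    slip = room(slip)   # Room B responds to Room A
    print(exchange, slip)
```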

Defenders of the Turing test might insist, of course, that to a (human, Chinese-speaking) observer it is indistinguishable from a real conversation. But this would again be to import into the situation an outside observer for whom the signs being exchanged are meaningful. The Chinese Rooms are completely incapable of generating meaning on their own. As Searle puts it, you cannot get semantics from syntax.

4. GENERAL CONCLUSIONS
We see here a general problem with attempts to give an “objective” account of understanding or subjectivity. Turing-types are right to criticize introspective accounts, which would reduce the meaning of my situation to the meaning it has for me. However, we are no better off if we exchange the introspective standpoint for a purely external one. We will then arrive only at a description of what the situation means to the observer, when the whole problem was to describe (objectively) what it means to the system being observed.

If we take the observer’s perspective for granted then we only postpone the problem we set out to solve, for the observer is also a thinking being, and her perspective must also be accounted for. The claim that computers are thinking can only be sustained by appealing illicitly to the perspective of an observer who is not a computer, whose thought is more than an algorithm.
