Chinese room

The Chinese Room argument is a thought experiment and associated arguments designed by John Searle (1980) as a counterargument to claims made by supporters of what Searle called strong artificial intelligence (see also functionalism).

The argument is that a computer cannot have understanding, because a human being who runs a computer program by hand does not thereby acquire understanding. The argument is taken very seriously in philosophy, but is regarded as invalid by many scientists, including many outside the field of AI.

Searle's philosophical argument
Searle laid out the Chinese Room argument in his paper "Minds, Brains, and Programs," published in 1980. Since then, it has been a recurring trope in the debate over whether computers can truly think and understand. Searle argues as follows:

Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), produces other Chinese characters, which it presents as output. Suppose that this computer performs this task so convincingly that it easily passes the Turing test. In other words, it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion that proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in an enormous room in which he receives Chinese characters, consults a rule book, and processes the Chinese characters according to the rules. Searle notes that he doesn't, of course, understand a word of Chinese. He simply manipulates what to him are meaningless squiggles, using the rules and whatever other equipment is provided in the room, such as paper, pencils, erasers, and millions of meticulously cross-referenced filing cabinets.

After eons of such symbol manipulation, Searle produces an appropriate answer in Chinese. Yet in all this time he has never learned Chinese. Searle argues that his lack of understanding shows that computers do not understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
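The room's procedure can be caricatured in a few lines of code. The sketch below is a toy illustration, not anything from Searle's paper: the "rule book" is a lookup table keyed on the literal shape of the input string, and the entries are invented for illustration. The point is that output is produced by matching characters, with no representation of meaning anywhere in the process.

```python
# Toy sketch of the room's rule book as a pure lookup table.
# The entries are invented; matching works on the literal characters
# (dictionary-key equality), never on what they mean.
RULE_BOOK = {
    "你好": "你好！",
    "你会说中文吗？": "会的。",
}

# A stock reply for inputs the book does not cover.
FALLBACK = "请再说一遍。"

def chinese_room(symbols: str) -> str:
    """Map input symbols to output symbols by shape alone.

    Nothing here consults the meaning of the characters; the lookup
    succeeds or fails purely on string equality.
    """
    return RULE_BOOK.get(symbols, FALLBACK)

print(chinese_room("你好"))  # prints 你好！
```

A real conversational program would be vastly more complex, but Searle's claim is that added complexity changes nothing in kind: it is still the manipulation of symbols by their form.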

History
In 1980, John Searle published "Minds, Brains and Programs" in the journal Behavioral and Brain Sciences. In this article, Searle sets out the argument, and then replies to the half-dozen main objections that had been raised during his presentations at various university campuses (see next section). In addition, Searle's article in BBS was published along with comments and criticisms by 27 cognitive science researchers. These 27 comments were followed by Searle's replies to his critics.

Over the last two decades of the 20th century, the Chinese Room argument was the subject of many discussions. In 1984, Searle presented the Chinese Room argument in a book, Minds, Brains and Science. In January 1990, Scientific American took the debate to a general scientific audience. Searle included the Chinese Room argument in his contribution, "Is the Brain's Mind a Computer Program?", which was followed by a responding article, "Could a Machine Think?", written by Paul and Patricia Churchland. Soon thereafter Searle had a published exchange about the Chinese Room with another leading philosopher, Jerry Fodor (in Rosenthal (ed.) 1991).

The heart of the argument is an imagined human simulation of a computer, similar to Turing's Paper Machine. The human in the Chinese Room follows English instructions for manipulating Chinese characters, whereas a computer "follows" a program written in a computing language. The human produces the appearance of understanding Chinese by following the symbol-manipulating instructions, but does not thereby come to understand Chinese. Since a computer just does what the human does — manipulate symbols on the basis of their syntax alone — no computer, merely by following a program, comes to genuinely understand Chinese.

This argument, based closely on the Chinese Room scenario, is directed at a position Searle calls "Strong AI". Strong AI is the view that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to the humans whose abilities they mimic. According to Strong AI, a computer may play chess intelligently, make a clever move, or understand language. By contrast, "weak AI" is the view that computers are merely useful in psychology, linguistics, and other areas, in part because they can simulate mental abilities. But weak AI makes no claim that computers can actually understand or be intelligent. The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think — Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought.

We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a "program for L" is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.


 * 1) If Strong AI is true, then there is a program for L such that if any computing system runs that program, that system thereby comes to understand L.
 * 2) I could run a program for L without thereby coming to understand L.
 * 3) Therefore Strong AI is false.

The second premise is supported by the Chinese Room thought experiment. The conclusion of this argument is that running a program cannot create understanding. The wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation).
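On one standard reading, premise 2 holds for every program for L, with Searle himself as the universal witness. Written out in first-order terms (the predicate names here are this summary's, not Searle's), the reductio runs:

```latex
% Sketch of the narrow argument. Runs(s,p): system s runs program p;
% Understands(s,L): system s understands language L.
\begin{aligned}
\text{(1)}\quad & \mathrm{StrongAI} \rightarrow \exists p\,\forall s\,\bigl(\mathrm{Runs}(s,p) \rightarrow \mathrm{Understands}(s,L)\bigr)\\
\text{(2)}\quad & \forall p\,\exists s\,\bigl(\mathrm{Runs}(s,p) \wedge \neg\,\mathrm{Understands}(s,L)\bigr)\\
\text{(3)}\quad & \therefore\ \neg\,\mathrm{StrongAI}
\end{aligned}
```

If Strong AI were true, (1) would supply a program all of whose runners understand L; (2), applied to that very program, supplies a runner (the person in the room) who does not, a contradiction. On this reading the inference is valid, and the thought experiment's work is to make premise (2) plausible.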

The core of Searle's argument is the distinction between syntax and semantics. The room is able to shuffle characters according to the rule book. That is, the room’s behaviour can be described as following syntactical rules. But in Searle's account it does not know the meaning of what it has done; that is, it has no semantic content. The characters do not even count as symbols because they are not interpreted at any stage of the process.

Formal arguments
In 1984 Searle produced a more formal version of the argument of which the Chinese Room forms a part. He listed four premises:


 * 1) Brains cause minds.
 * 2) Syntax is not sufficient for semantics.
 * 3) Computer programs are entirely defined by their formal, or syntactical, structure.
 * 4) Minds have mental contents; specifically, they have semantic contents.

The second premise is supposedly supported by the Chinese Room argument, since Searle holds that the room follows only formal syntactical rules, and does not “understand” Chinese. Searle posits that these lead directly to four conclusions:


 * 1) No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.
 * 2) The way that brain functions cause minds cannot be solely in virtue of running a computer program.
 * 3) Anything else that caused minds would have to have causal powers at least equivalent to those of the brain.
 * 4) The procedures of a computer program would not by themselves be sufficient to grant an artifact possession of mental states equivalent to those of a human; the artifact would require the capabilities and powers of a brain.

Searle describes this version as "excessively crude." There has been considerable debate about whether this argument is indeed valid. These discussions center on the various ways in which the premises can be parsed. One can read premise 3 as saying that computer programs have syntactic but not semantic content, in which case premises 2, 3 and 4 validly lead to conclusion 1. This in turn leads to debate over the origin of the semantic content of a computer program.
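Under that reading, premises 2 and 3 jointly imply that programs carry no semantic content, and the step to conclusion 1 can be laid out explicitly (the predicate abbreviations below are this summary's, not Searle's):

```latex
% Prog(x): x is a computer program; Syn(x): x is purely syntactic;
% Sem(x): x has semantic content; Mind(x): x suffices for a mind.
\begin{aligned}
\text{(P2)}\quad & \forall x\,\bigl(\mathrm{Syn}(x) \rightarrow \neg\,\mathrm{Sem}(x)\bigr)\\
\text{(P3)}\quad & \forall x\,\bigl(\mathrm{Prog}(x) \rightarrow \mathrm{Syn}(x)\bigr)\\
\text{(P4)}\quad & \forall x\,\bigl(\mathrm{Mind}(x) \rightarrow \mathrm{Sem}(x)\bigr)\\
\text{(C1)}\quad & \therefore\ \forall x\,\bigl(\mathrm{Prog}(x) \rightarrow \neg\,\mathrm{Mind}(x)\bigr)
\end{aligned}
```

From P2 and P3, no program has semantic content; by P4, whatever lacks semantic content is not a mind; hence C1. Note that this rendering of P2 (nothing purely syntactic has semantics) is stronger than the bare slogan "syntax is not sufficient for semantics", which is one reason the argument's validity is disputed.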

Replies
There are many criticisms of Searle’s argument. Most can be categorized as either systems replies or robot replies.

The systems reply
Although the individual in the Chinese room does not understand Chinese, perhaps the person and the room, including the rule book, the scratch paper, and the contents of the filing cabinets, considered together as a system, do.

Searle's reply to this is that someone might in principle memorize the rule book, the symbols on the scratch paper, and the cross-referenced contents of all the millions of filing cabinets. They would then be able to interact as if they understood Chinese, but would still just be following a set of rules, with no understanding of the significance of the symbols they are manipulating. This leads to the interesting problem of a person being able to converse fluently in Chinese without "knowing" Chinese. Such a person would face the formidable task of learning when to say certain things (and learning a huge number of rules for "getting by" in a conversation) without understanding what the words mean. To Searle, fluent rule-following and genuine understanding remain clearly separate.

In Consciousness Explained, Daniel C. Dennett does not treat rule-following and understanding as separate. He offers an extension to the systems reply: Searle's example, he argues, is designed to mislead the imaginer. We are being asked to imagine a machine that would pass the Turing test simply by manipulating symbols in a look-up table, yet it is highly unlikely that such a crude system could pass the Turing test. Critics of Dennett have countered that a computer program is simply a logical list of commands, which could of course be put into a book and followed, just as a computer would follow them. So, if any computer program could pass the Turing test, then a person with the same instructions could also "pass" the test, only much more slowly.

If the system were extended to include the various detection systems needed to produce consistently sensible responses, and were presumably rewritten as a massively parallel system rather than a serial von Neumann architecture, it quickly becomes much less "obvious" that there is no conscious awareness going on. For the Chinese Room to pass the Turing test, either the operator would have to be supported by vast numbers of equally mindless helpers, or else the amount of time given to produce an answer to even the most basic question would have to be absolutely enormous—many millions or perhaps even billions of years.

The point made by Dennett is that by imagining "Yes, it's conceivable for someone to use a look-up table to take input and give output and pass the Turing test," we distort the complexities genuinely involved to such an extent that it does indeed seem "obvious" that this system would not be conscious. However, such a system is irrelevant. Any real system able to genuinely fulfill the necessary requirements would be so complex that it would not be at all "obvious" that it lacked a true understanding of Chinese. It would clearly need to weigh up concepts, formulate possible answers, prune its options, and so forth, until its operation either looked like a slow and detailed analysis of the semantics of the input or behaved entirely like any other speaker of Chinese. So, according to Dennett's version of the systems reply, unless we are forced to "prove" that a billion Chinese speakers are all more than massively parallel networks simulating a von Neumann machine for output, we will have to accept that the Chinese Room is every bit as much a "true" Chinese speaker as any Chinese speaker alive.

The robot reply
Suppose that instead of a room, the program was placed into a robot that could wander around and interact with its environment. Surely then it would be said to understand what it is doing? Searle’s reply is to suppose that, unbeknownst to the individual in the Chinese room, some of the inputs he was receiving came directly from a camera mounted on a robot, and some of the outputs were used to manipulate the arms and legs of the robot. Nevertheless, the person in the room is still just following the rules, and does not know what the symbols mean.

Suppose that the program instantiated in the rule book simulated in fine detail the interaction of the neurons in the brain of a Chinese speaker. Then surely the program must be said to understand Chinese? Searle replies that such a simulation will not have reproduced the important features of the brain: its causal and intentional states.

But what if a brain simulation were connected to the world in such a way that it possessed the causal power of a real brain, perhaps linked to a robot of the type described above? Then surely it would be able to think. Searle agrees that it is in principle possible to create an artificial intelligence, but points out that such a machine would have to have the same causal powers as a brain. It would be more than just a computer program.

Related works

 * Wikibooks: Consciousness Studies
 * John Searle (1980). "Minds, Brains and Programs", original draft from Behavioral and Brain Sciences.
 * John Searle (1983). "Can Computers Think?" in David Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings (Oxford, 2002), ISBN 0-19-514581-X, pp. 669-675.
 * John Searle (1984). Minds, Brains and Science: The 1984 Reith Lectures, Harvard University Press. Hardcover: ISBN 0-674-57631-4; paperback: ISBN 0-674-57633-0.
 * Stevan Harnad (2001) What's Wrong and Right About Searle's Chinese Room Argument in Bishop, M. and Preston, J., Eds. Essays on Searle's Chinese Room Argument. Oxford University Press.
 * Stevan Harnad (2005) Searle's Chinese Room Argument, in Encyclopedia of Philosophy. Macmillan.
 * Dissertation by Larry Stephen Hauser.
 * Larry Hauser, Searle's Chinese Box: Debunking the Chinese Room Argument, available at http://members.aol.com/lshauser2/chinabox.html
 * Stanford Encyclopedia of Philosophy on The Chinese Room Argument
 * Philosophical and analytic considerations in the Chinese Room thought experiment
 * Interview in which Searle discusses the Chinese Room
 * Understanding the Chinese Room (critical) from Zompist.com
 * A Refutation of John Searle's "Chinese Room Argument", by Bob Murphy