Friday, December 26, 2008

Exploring the Chinese Room

Monday, in my post about the Turing test, I briefly explored John Searle’s thought-experiment “The Chinese Room.” Today, I would like to delve further into this interesting topic.

First, I would like to better explain the argument itself—I feel I did a somewhat shoddy job of doing so in Monday’s post. Rather than explain it myself, I will quote Searle’s description of the thought-experiment from his paper, “Minds, Brains, and Programs.” Unfortunately, his description is a bit lengthy:

Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles.

Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program."

Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view -- that is, from the point of view of somebody outside the room in which I am locked -- my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese.

Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view -- from the point of view of someone reading my "answers" -- the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.

Searle’s point is obvious: in the thought-experiment, he manipulates Chinese symbols without any true semantic understanding of what they mean. This, he argues, is exactly what computers do: they simply carry out “the program” without truly understanding what they are doing. It is important to note that Searle is not a dualist—he does not believe the human mind has any kind of non-physical component. He concedes that the human brain is simply a biological “machine,” and that an artificial mind could hypothetically be constructed. Searle is trying to prove that a computer program can never create a true “mind,” because programs manipulate symbols that have syntax but no semantics. Essentially, Searle is challenging the computational theory of the mind: the idea that the human mind can be fully explained in terms of input and output (note how similar this is to philosophical determinism).
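To make the syntax-without-semantics point concrete, here is a minimal sketch of the room as a pure lookup procedure. The rule table and phrases are my own made-up illustration, not anything from Searle’s paper: the “operator” matches shapes to shapes and never consults a meaning.

```python
# A toy "Chinese Room": the operator matches the shape of the input
# against a rulebook and emits the prescribed output shape. Nothing in
# the procedure ever refers to what any symbol means.
# (Hypothetical rule table for illustration only.)

RULES = {
    "你好吗": "我很好",          # question shape -> answer shape
    "你叫什么名字": "我叫小明",   # the operator never translates either side
}

def room(question: str) -> str:
    """Return the scripted answer for a known shape, purely by matching."""
    # Unknown shapes get a fixed fallback shape, still without understanding.
    return RULES.get(question, "对不起")

print(room("你好吗"))  # the room emits a fluent-looking reply by rote
```

From the outside, the room’s replies look competent; from the inside, the entire “skill” is the dictionary lookup. That gap is exactly what Searle’s argument trades on.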

Also, I should mention that though I had never heard of the Chinese Room argument until the other day, it is one of the most important issues in cognitive science and philosophy today. In fact, the influential AI researcher Pat Hayes even joked that cognitive science should be renamed “the ongoing research program of showing Searle's Chinese Room Argument to be false.” There are an enormous number of responses to the argument, and unfortunately I do not have time to cover them all today. However, I would like to look at the implications of Searle’s argument and at some of the more convincing responses.

Many philosophers and scientists have looked at what the Chinese Room thought-experiment implies, including John Searle himself. Searle created the following proof from his thought-experiment:

Axiom 1: Computer programs are formal and syntactic.

Axiom 2: Minds have mental, semantic contents.

Axiom 3: Syntax is not enough to create a semantic mind.

Conclusion: Programs are “neither constitutive of nor sufficient for minds.”
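The shape of the argument can be written out schematically (the notation is mine, not Searle’s): it is a straightforward syllogism, and everything turns on whether the axioms hold.

```latex
% S(x): x is purely formal/syntactic;  Sem(x): x has semantic content
% A1: every program is purely syntactic
\forall p\,[\mathrm{Program}(p) \rightarrow S(p)]
% A2: every mind has semantic content
\forall m\,[\mathrm{Mind}(m) \rightarrow \mathrm{Sem}(m)]
% A3: syntax alone never suffices for semantics
\forall x\,[S(x) \rightarrow \neg\,\mathrm{SufficesFor}(x, \mathrm{Sem})]
% C: therefore no program suffices for a mind
\forall p\,[\mathrm{Program}(p) \rightarrow \neg\,\mathrm{SufficesFor}(p, \mathrm{Mind})]
```

Laid out this way, it is easy to see why critics concentrate their fire on Axiom 3: grant all three axioms and the conclusion follows, so the Chinese Room itself is really an argument for that one premise.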

Searle’s conclusion follows intuitively enough from the axioms he starts with. Axioms 1 and 2 are fairly uncontroversial—1 states that computer programs are purely formal symbol-manipulators with no true understanding, and 2 states that human minds genuinely understand. Axiom 3 is what the Chinese Room is meant to establish: that a system can pass a Turing test without true understanding (at least, according to Searle). However, as I mentioned, the Chinese Room has attracted a great many critics, and there are a multitude of responses to the proof from various areas of science and philosophy. These responses attack Searle’s axioms, his conclusion, and the validity of the thought-experiment itself. I would like to take a few moments to explore some of these claims.

The first is the “systems” reply. It states that even though the man in the room does not understand Chinese, the man, the room, and the program taken together as a system do. Searle’s counter is that the man could memorize the entire rulebook, making him the whole system even though he still understands no Chinese. A refinement of the systems reply holds that the understanding belongs to a virtual mind, one whose physical implementation can vary (much as a computer’s software is a virtual machine running on hardware). On this view, there is an “implementation-independent” virtual mind at work. Searle, however, would maintain that such a virtual mind is still a syntactic simulation, incapable of genuine understanding.

Other responses amount to appeals to practicality: for example, a “program” that did what Searle imagines would be enormously complex, and might require an entirely new kind of programming. However, I will not address these, because they miss the point—the Chinese Room is a hypothetical case, after all.

So, what is the final verdict on Searle’s Chinese Room? I don’t have one. Searle’s proof seems legitimate, but several of its premises remain unproven, as many of the responses show. I promise to revisit the Chinese Room soon, since it is such an important and influential argument. For now, all I can say is that since the Chinese Room resides in the grey area between science and philosophy, someday either experimentation or logic may yield the answer.

3 comments:

Ross said...

extremely interesting topic, bill. I had never heard of this either until you just brought it up. I love the concept of the Chinese Room, and I think Searle's take on it is reasonable. However, as you said, there are holes in the theory as a whole.

regardless, I think I agree with what he says on the possibility of constructing a human brain. I could have said that better.

of course, for the time being, and perhaps for the near future, such technology to reconstruct a brain is far from humanity's grasp. however, such technology may not exactly be technology; it would require us to provide computers with intangibles, and that would be impossible to do without proving Searle's theory, since everything would have to come from programming anyway.

and I don't know if you mentioned it, but this was from around 1980, no?

Ross said...

that was brett-i forgot to sign out of my brother's thing.

Bill said...

According to Searle, the only way to make computers think is to give them mechanical "brains." His point is that programming languages cannot give computers the ability to think.

And yes, this is from around 1980, which I find amazing, because programming languages were still in their infancy. Equally amazing is that the idea of the Turing Test comes from the 1950s.