Monday, December 22, 2008

Musings on the Turing test (part 1)

In 1950, mathematician and computer scientist Alan Turing began to explore the philosophical implications of computers, specifically the problem of machine “intelligence.” Turing asked whether machines could ever attain true intelligence or consciousness, and if they could, how they would differ from human beings (besides physically). Turing wrote the following in a paper:

“It is not difficult to devise a paper machine [computer] which will play a not very bad game of chess. Now get three men as subjects for the experiment. A, B and C. A and C are to be rather poor chess players, B is the operator who works the paper machine. Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.”

What Turing is saying is that, in this case, a computer is indistinguishable from a human. Building on this idea, Turing proposed what is now called the Turing test: a hypothetical conversation-based test meant to distinguish a human being from a computer. Many versions of the test have been devised, covering a variety of subjects. Contests have even been held in which programmers attempt to create programs that can pass the Turing test (or at least fool a certain percentage of the judges).

Despite its popularity, the Turing test is often criticized. One of the most compelling arguments against it is the “Chinese Room” thought experiment, devised by John Searle in 1980. Searle argues that a computer could answer every question correctly yet still lack true intelligence, which is what the test is really meant to detect. In other words, the computer could answer the questions simply by following a complex series of decision rules (to anyone who knows Java, think nested “if” statements). The computer is merely manipulating symbols, the way a non-Chinese-speaking person could manipulate Chinese characters: by following a rulebook, they could answer a question posed in written Chinese without actually understanding a word of it. This raises a slew of complicated questions, touching on determinism, the philosophy of mind, and the problem of consciousness.
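Searle's point can be made concrete with a toy program. The sketch below is a hypothetical illustration (the rulebook entries are invented for this example): it produces plausible answers by pure lookup, with a fallback of exactly the nested-“if” sort mentioned above, and understands nothing.

```java
import java.util.HashMap;
import java.util.Map;

// A toy "Chinese Room": answers questions by mechanically following a
// rulebook, with no understanding of what the symbols mean. The rules
// below are made up purely for illustration.
public class ChineseRoom {
    private static final Map<String, String> rulebook = new HashMap<>();
    static {
        rulebook.put("How are you?", "I am fine, thank you.");
        rulebook.put("What color is the sky?", "The sky is blue.");
    }

    // Like Searle's man in the room: match the incoming symbols against
    // the rulebook and emit the prescribed reply.
    public static String answer(String question) {
        if (rulebook.containsKey(question)) {
            return rulebook.get(question);
        }
        return "I do not know.";
    }

    public static void main(String[] args) {
        System.out.println(answer("What color is the sky?"));
    }
}
```

To an outside judge the replies can look competent, yet nothing in the program corresponds to understanding — which is exactly Searle's objection to treating a passed test as proof of intelligence.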

First, consider determinism and the computational theory of mind. On this view, human minds are computers: we simply take in data and process it the same way computers do, with no more “understanding” of concepts than computers have. If this is true, computers should eventually be able to pass the Turing test; all they have to do is mimic the algorithms the human mind uses. However, many philosophers believe in dualism, the idea that the mind has a non-physical component, something like a soul. In that case, computers will never be able to pass the Turing test, since a non-physical mind would truly have free will, which a computer cannot mimic.

The problem of consciousness also comes into play, since this is another aspect of the human mind that a computer may or may not be able to copy. It depends on the nature of consciousness: if consciousness simply emerges from the brain's enormous number of neurons, there is hope for computer consciousness yet. But if it comes from a non-physical source such as the soul, computers will never achieve consciousness as we know it. The relationship between consciousness and self-awareness also comes into play here: unless consciousness is defined as nothing more than self-awareness, computers may be able to achieve self-awareness without achieving true consciousness.

If we set these philosophical problems aside for a moment, though, and follow the “Chinese Room” view that computers may be able to pass the Turing test without being truly “intelligent,” we can examine the problem more practically. Many computer scientists have predicted that computers will soon be able to pass the Turing test because of future advances in computing power. Moore's Law holds that the number of transistors in an integrated circuit doubles roughly every two years, which means an exponential increase in computing power. So far, the industry has followed this pattern. However, many argue that it will eventually break down, because there is a point beyond which it becomes practically impossible to make smaller transistors; Moore himself has said he doubts the law will continue forever. Some believe quantum computers will be mature enough to replace conventional circuits by then, but that would also mean Moore's Law no longer holds, since it applies only to integrated circuits.
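To see what “doubling every two years” amounts to, here is a minimal sketch of the arithmetic. The starting figure is chosen loosely after the roughly 2,300 transistors of the Intel 4004; the projection is an idealized illustration of exponential growth, not a forecast.

```java
// Moore's Law sketch: transistor count doubling once every two years.
public class MooresLaw {
    // Idealized projection: one doubling (left shift) per two years.
    public static long project(long initialCount, int years) {
        return initialCount << (years / 2);
    }

    public static void main(String[] args) {
        // ~2,300 transistors carried forward 20 years = 10 doublings:
        // 2300 * 2^10 = 2,355,200
        System.out.println(project(2300, 20));
    }
}
```

Ten doublings already multiply the count by about a thousand, which is why even modest-sounding doubling periods produce the “enormous” growth the predictions rely on.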

However, it is clear that computers are going to undergo huge increases in processing power, whether they follow Moore's Law or not. If quantum computers eventually become a reality, the amount of computing power available will be enormous. With all this “intelligence” at a computer's fingertips, the Turing test as we know it may become obsolete, as computers will be able to immediately determine the “human” answer to any Turing test question with a low probability of error.

This aspect of the Turing test's implications is a popular subject of debate. Two prominent technologists, Mitch Kapor and Raymond Kurzweil, have placed a $10,000 bet on whether a computer will be able to pass a Turing test by 2029. Check out this link for their arguments and the conditions of the bet.

Another time, perhaps, I will review Kapor and Kurzweil’s arguments. I have barely scratched the surface on this topic, so I will almost certainly discuss it again. 

3 comments:

Matt C. said...

I feel that part of intelligence is consciousness. Isn't consciousness just absorbing information about the situation and analyzing it? That sounds like something computers can do. Agree or disagree?

Andrew said...

I believe Matt brings up a good point here: what exactly is consciousness? Is it simply how we react to situations because that's how the neurons in our brain function due to evolution? Or is there really a "soul" or some unseen and intangible part of our psyche? Obviously we can debate this for as long as we want and still not come any closer to an answer. Without experiencing life without our "consciousness", it will be difficult to prove we even have it in the first place.

Do I think computers will become more "intelligent"? I believe they will become more efficient and precise when analyzing a situation than the human mind and will act without the obstruction of human limits and emotions. They will be able to sift through information faster than any human could. Does this equal intelligence? Maybe it does, maybe it doesn't.

Bill said...

It depends on how you define consciousness. Most people define it as self-awareness plus the cognitive abilities you mentioned. Computers are marginally good at analysis, but they are doing it in the "Chinese Room" way, without really understanding it. (Unless you believe in a deterministic universe, in which case even we cannot truly "analyze" anything.) As for self-awareness, we are not even close to creating a computer that can do that. Though they may eventually become that sophisticated (whether per Moore's Law or not), they certainly can't do it now.