Friday, January 2, 2009

Musings on the Turing Test (part 2)

Happy New Year to all! I hope that 2009 is a better year than 2008 was. Sadly, the Gaza crisis is still going on, though I hope it will be resolved soon. I will not be addressing that issue today, however, as I would like to talk more about the Turing test.

The Loebner Prize is a contest held every year in which contestants try to program a computer to pass that year’s version of the Turing test. The winner is the program that manages to appear “human” to the greatest number of examiners. In 1991, there was some controversy over the winner: the program judged “most convincing” fooled many examiners because it was programmed to make typing errors. Since then, the Loebner Prize has focused on “chatterbots,” computer programs that simulate a typed conversation.

This brings me to the point I would like to discuss today: the concept of so-called “artificial stupidity,” the idea that computer programs must be made to commit errors in order to appear human. The idea is not new; as early as 1948, Alan Turing realized that a computer that appears perfect cannot pass as human:

“It is claimed that the interrogator could distinguish the machine from the man simply by setting them a number of problems in arithmetic. The machine would be unmasked because of its deadly accuracy.”

Turing’s point is clear: to appear human, machines cannot be perfect. This is evident in the Loebner Prize winners of both 1991 and 2008. In fact, after looking at the 2008 transcripts, I realized that all of the top five programs committed errors on purpose. (These transcripts are available here.) Many of the more successful ones also delayed their responses by an amount of time proportional to the number of words they “typed,” since an instantaneous response would be suspicious. This, too, is a form of “artificial stupidity.”
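Out of curiosity, here is a minimal Python sketch of what those two tricks might look like. Nothing here is taken from an actual Loebner entry: the typo rate, the typing speed, and the function names are all invented for illustration.

```python
import random
import time

# Made-up parameters for illustration only; real contest entries
# presumably tune these much more carefully.
TYPO_RATE = 0.03          # chance that any given letter is mistyped
SECONDS_PER_WORD = 0.4    # crude approximation of human typing speed

def add_typos(text, rate=TYPO_RATE):
    """Randomly replace letters to simulate careless human typing."""
    out = []
    for ch in text:
        if ch.isalpha() and random.random() < rate:
            out.append(random.choice("abcdefghijklmnopqrstuvwxyz"))
        else:
            out.append(ch)
    return "".join(out)

def humanized_reply(text):
    """Pause in proportion to the reply's word count, then return the
    reply with occasional typos, so it feels typed rather than computed."""
    time.sleep(SECONDS_PER_WORD * len(text.split()))
    return add_typos(text)

if __name__ == "__main__":
    print(humanized_reply("I am quite sure I am human, thank you very much."))
```

The point of the delay is that a perfectly instant answer gives the machine away just as surely as perfect arithmetic does.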

What does this mean for us? For the layman, very little. For the computer scientist, though, it more clearly defines the challenge of making computers seem human. This challenge no longer consists of simply making computers smarter, as it did 30 or 40 years ago; now, it consists of making computers imitate all the nuances of human beings. This is probably a much harder task, but I have no doubt that computers will eventually get there. When we reach that point, we are going to have to ask ourselves some serious questions about our humanity. Until then, all we can do is wait.

As a side note: last year’s winner of the Loebner Prize, a program called Elbot, can be “talked to” on its creator’s website. I conducted a half-hour conversation with it, and what I found was startling: the program is able to have a perfectly normal-sounding dialogue on almost every subject. I strongly recommend trying it out for yourself; the link can be found here.

2 comments:

steve y said...

You call that dialogue normal? It will occasionally provide a response that makes perfect sense, but most of the time it doesn't really respond like a human would at all.

Bill said...

True, it is a long way from sounding natural. But it is surprisingly fluid-sounding.