One reason that the Turing test is no longer considered to be a meaningful
measure of intelligence is that an eerie appearance of intelligence can be
produced with relative ease. A well-known example arose as a result of the program
DOCTOR (a version of the more general system called ELIZA) developed
by Joseph Weizenbaum in the mid-1960s. This interactive program was designed
to project the image of a Rogerian analyst conducting a psychological interview;
the computer played the role of the analyst while the user played the patient.
Internally, all that DOCTOR did was restructure the statements made by the
patient according to some well-defined rules and direct them back to the patient.
For example, in response to the statement “I am tired today,” DOCTOR might
have replied with “Why do you think you’re tired today?” If DOCTOR was unable
to recognize the sentence structure, it merely responded with something like
“Go on” or “That’s very interesting.”
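The restructuring described above can be sketched as a small rule-based program. The patterns and pronoun swaps here are hypothetical stand-ins, not Weizenbaum's actual rules: the program matches a statement against a few templates, reflects first-person words into second person, and echoes the result back as a question, falling back to a noncommittal prompt when no rule applies.

```python
import re

# Hypothetical pronoun reflections: first person -> second person.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# Hypothetical pattern/response rules in the spirit of DOCTOR.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you think you're {}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {}?"),
]

def reflect(fragment):
    # Swap each first-person word for its second-person equivalent.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement):
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(reflect(match.group(1)))
    # No rule recognized the sentence structure: respond noncommittally.
    return "Go on."

print(respond("I am tired today"))    # → Why do you think you're tired today?
print(respond("The weather is odd"))  # → Go on.
```

The sketch makes the point of the passage concrete: the appearance of an attentive listener emerges from nothing more than string matching and substitution.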
Weizenbaum’s purpose in developing DOCTOR was to study natural-language
communication. The subject of psychotherapy merely provided an environment
in which the program could “communicate.” To Weizenbaum’s dismay,
however, several psychologists proposed using the program for actual psychotherapy.
(The Rogerian thesis is that the patient, not the analyst, should lead the discussion
during the therapeutic session, and thus, they argued, a computer could
possibly conduct a discussion as well as a therapist could.) Moreover, DOCTOR
projected the image of comprehension so strongly that many who “communicated”
with it became subservient to the machine’s question-and-answer dialogue.
In a sense, DOCTOR passed the Turing test. The result was that ethical, as
well as technical, issues were raised, and Weizenbaum became an advocate for
maintaining human dignity in a world of advancing technology.
More recent examples of Turing test “successes” include Internet viruses that
carry on “intelligent” dialogues with a human victim in order to trick the victim
into lowering his or her guard against malware. Moreover, phenomena similar to Turing
tests occur in the context of computer games such as chess-playing programs.
Although these programs select moves merely by applying brute-force techniques
(similar to those we will discuss in Section 11.3), humans competing
against the computer often experience the sensation that the machine possesses
creativity and even a personality. Similar sensations occur in robotics, where
machines have been built with physical attributes that project intelligent characteristics.
Examples include toy robot dogs that project adorable personalities
merely by tilting their heads or lifting their ears in response to a sound.