consider the points made by Schank in the light of existing (unintelligent)
software.
Games were not only among the first programs to appear on the microcomputer
software market, but also among the first productions of AI researchers
interested in problem solving. This was because the rules and elements of
games constitute finite sets and thus proved relatively easy to program.
Games are fun, whether played on computers or otherwise.
Because they are the most popular type of software, many of us identify
games with microcomputers, but this does not guarantee that every kind of
interaction with computers is fun.
In fact, much of the current teaching software is a source of frustration for
the student-user in that communication with the machine is often awkward
(Nievergelt et al. 1986). Jenny Thomas points out that ‘In my experience, it
is much more likely to be the semi-(computer)illiterate teachers who are
enthralled by CALL software than the average blasé teenager!’ (Thomas
1986:117).
Computers, it is claimed, can be programmed to teach more thoroughly and
interactively than textbooks; but this statement implies that, whereas books
are regarded as one medium at the teacher’s disposal among others, the
computer is viewed as a surrogate teacher. Indeed, computers could do
things no other medium can, such as keeping track of students’ progress,
choosing grades of difficulty and providing immediate feedback on students’
responses to exercises. To this extent, the computer could take over a part
of the teaching process which has so far been reserved for human teachers.
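By way of illustration, a minimal sketch of this kind of record-keeping and
adaptive feedback might look as follows (the sketch is in Python; the
exercise format, the three-item window and the adjustment rule are invented
for the purpose and describe no existing CALL package):

# Illustrative sketch only: a toy tutor that records every attempt,
# gives immediate feedback and adjusts its level of difficulty.
class ToyTutor:
    def __init__(self, exercises):
        self.exercises = exercises    # e.g. {level: [(prompt, expected), ...]}
        self.level = 1
        self.record = []              # the student's complete history

    def ask(self, prompt, expected, response):
        correct = response.strip().lower() == expected.lower()
        self.record.append((self.level, prompt, response, correct))
        # Immediate feedback on the student's response.
        print("Correct." if correct else f"No, the expected answer was '{expected}'.")
        # Crude difficulty adjustment based on the last three attempts.
        recent = [r[-1] for r in self.record[-3:]]
        if len(recent) == 3 and all(recent):
            self.level += 1
        elif len(recent) == 3 and not any(recent):
            self.level = max(1, self.level - 1)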
But as it stands, Schank’s (1984) statement simply does not reflect the
prevailing state of affairs in CALL.
Most CALL still consists of rather tedious drill activities, cloze-type
exercises, or multiple-choice questionnaires. These activities stand in
stark contrast to the generally accepted concept of communicative teaching,
which highlights the limits and perils of repetitive drills in language
learning and teaching.
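The triviality of such exercises is easily demonstrated: the following few
lines are a complete cloze drill of the kind just described (in Python; the
sentences and the exact-match scoring are invented for this illustration):

# Illustrative sketch only: a minimal cloze drill.
CLOZE_ITEMS = [
    ("She ___ to school every day.", "goes"),
    ("They ___ football last Sunday.", "played"),
]

def run_cloze(items):
    score = 0
    for sentence, answer in items:
        response = input(sentence + "  > ")
        if response.strip().lower() == answer:     # exact match or nothing
            print("Right.")
            score += 1
        else:
            print(f"Wrong. The expected word was '{answer}'.")
    print(f"Score: {score} out of {len(items)}")

run_cloze(CLOZE_ITEMS)

The exact-match condition is the crux: any well-formed alternative the
author did not anticipate (‘walks’, ‘cycles’) is simply marked wrong, which
is precisely what puts such drills at odds with communicative teaching.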
Those programs which do allow for creativity and initiative on the
student’s part all have a very restricted domain of discourse, meaning
that they model part of a complex system which has been arbitrarily
reduced to a finite set of objects, as, for example, in Higgins’ (1985)
‘Grammarland’ program.
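What a ‘restricted domain of discourse’ amounts to in practice can be
suggested by the following toy micro-world (in Python; the objects, places
and sentence patterns are invented, and the program is in the spirit of, not
a reproduction of, Grammarland):

# Illustrative sketch only: a micro-world whose domain of discourse has been
# deliberately reduced to a finite set of objects, places and sentence patterns.
WORLD = {"ball": "box", "pen": "table", "cat": "chair"}   # object -> location
PLACES = {"box", "table", "chair", "floor"}

def respond(sentence):
    words = sentence.lower().strip("?.! ").split()
    # Pattern 1: "Where is the X?"
    if len(words) == 4 and words[:3] == ["where", "is", "the"] and words[3] in WORLD:
        return f"The {words[3]} is on the {WORLD[words[3]]}."
    # Pattern 2: "Put the X on the Y."
    if (len(words) == 6 and words[:2] == ["put", "the"] and words[2] in WORLD
            and words[3:5] == ["on", "the"] and words[5] in PLACES):
        WORLD[words[2]] = words[5]
        return f"All right, the {words[2]} is now on the {words[5]}."
    # Anything else falls outside the domain of discourse.
    return "I can only talk about the ball, the pen and the cat."

print(respond("Where is the ball?"))
print(respond("Put the ball on the floor."))
print(respond("Why is the sky blue?"))

However open-ended the exchange may feel to the learner, nothing outside
the finite set of objects can even be mentioned; this is what ‘arbitrarily
reduced’ means above.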
Computers do not get annoyed, frustrated, or depressed. They do not
punish, insult, or judge students. Consequently, timid students who are
often afraid of making a fool of themselves in the classroom or of being
judged by their teacher are expected to overcome their self-consciousness
when interacting with the computer.
Self-conscious students may feel more at ease in front of a computer
monitor than in the classroom, but many people do not enjoy interaction
with computers. They lack the typing skills necessary to work at a reasonable
pace. They are apprehensive about what the machine does with their
input (keeping track of their mistakes!). They miss the warmth of human
interaction. In fact, though this may be only transitory, many students may
prefer human teachers who get bored, red-herringed, and frustrated to a
machine which never makes a mistake and which sticks adamantly to the
curriculum.
Finally, the computer’s greatest drawback for FLT applications is that it
does not have a voice (or has an awful one), and that it cannot cope with
oral input, much less with correct pronunciation.