In addition to representing and manipulating knowledge, we would like to give
intelligent agents the ability to acquire new knowledge. We can always “teach” a
computer-based agent by writing and installing a new program or explicitly
adding to its stored data, but we would like intelligent agents to be able to learn
on their own. We want agents to adapt to changing environments and to perform
tasks for which we cannot easily write programs in advance. A robot designed for
household chores will be faced with new furniture, new appliances, new pets,
and even new owners. An autonomous, self-driving car must adapt to variations
in the boundary lines on roads. Game playing agents should be able to develop
and apply new strategies.
One way of classifying approaches to computer learning is by the level of
human intervention required. At the first level is learning by imitation, in
which a person directly demonstrates the steps in a task (perhaps by carrying
out a sequence of computer operations or by physically moving a robot through
a sequence of motions) and the computer simply records the steps. This form of
learning has been used for years in application programs such as spreadsheets
and word processors, where frequently occurring sequences of commands are
recorded and later replayed with a single request. Note that learning by imitation
places little responsibility on the agent.
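To make the record-and-replay idea concrete, here is a minimal sketch in Python. The Recorder class and its record and replay methods are invented for this illustration; they do not correspond to the macro facility of any particular spreadsheet or word processor.

    # A minimal sketch of learning by imitation as macro recording.
    # All names here (Recorder, record, replay) are hypothetical and
    # stand in for the macro facility of a real application program.
    class Recorder:
        def __init__(self):
            self.steps = []    # the demonstrated sequence of operations

        def record(self, operation, *args):
            # Store one demonstrated step and carry it out immediately.
            self.steps.append((operation, args))
            operation(*args)

        def replay(self):
            # Repeat the entire recorded sequence on a single request.
            for operation, args in self.steps:
                operation(*args)

    # "Teaching" the agent a two-step macro by demonstration.
    recorder = Recorder()
    recorder.record(print, "select column A")
    recorder.record(print, "apply bold formatting")
    recorder.replay()    # the agent simply repeats both steps verbatim

Because the agent stores the steps verbatim and never generalizes, all of the intelligence remains with the human demonstrator.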
At the next level is learning by supervised training. In supervised training,
a person identifies the correct response for a series of examples, and the
agent then generalizes from those examples to develop an algorithm that applies to
new cases. The series of examples is called the training set. Typical applications
of supervised training include learning to recognize a person’s handwriting or
voice, learning to distinguish between junk email and welcome email, and learning
to identify a disease from a set of symptoms.
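As a concrete illustration of generalizing from a training set, here is a minimal sketch of a one-nearest-neighbor classifier in Python. The symptom encoding, the example data, and the function names are all invented for this illustration; real supervised-learning systems use far richer features and algorithms.

    # A minimal sketch of supervised training: a one-nearest-neighbor
    # rule that generalizes from a labeled training set. The feature
    # vectors and labels below are invented for illustration only.
    def distance(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def classify(example, training_set):
        # Label a new case with the label of its closest training example.
        _, label = min(training_set,
                       key=lambda pair: distance(pair[0], example))
        return label

    # Training set: (features, correct response) pairs supplied by a person.
    # Each feature vector records (fever, cough, rash) as 0 or 1.
    training_set = [
        ((1, 1, 0), "flu"),
        ((0, 1, 0), "cold"),
        ((1, 0, 1), "measles"),
    ]

    # The agent generalizes: this exact case never appeared in training.
    print(classify((1, 1, 1), training_set))

Unlike learning by imitation, the agent here must do real work: the quality of its answers on new cases depends on how well its classification rule generalizes beyond the examples it was shown.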