Now consider the artificial neural network shown in Figure 11.21.
Following the conventions used to depict artificial neural networks, each circle
in the figure represents a neuron whose threshold value is recorded inside the
circle. Instead of arrows, the lines connecting the circles represent two-way
connections between the corresponding neurons. That is, a line connecting
two neurons indicates that the output of each neuron is connected as an input
to the other. Thus the output of the center neuron is connected as an input to
each of the neurons around the perimeter, and the output of each of the neurons
around the perimeter is connected as an input to the center neuron as
well as an input to each of its immediate neighbors on the perimeter. Two connected
neurons associate the same weight with each other’s output. This common
weight is recorded next to the line connecting the neurons. Thus the
neuron at the top of the diagram associates a weight of 1 with the input it
receives from the center neuron and a weight of 1 with the inputs it receives
from its two neighbors on the perimeter. Likewise, the center neuron associates
a weight of 1 with each of the values it receives from the neurons around
the perimeter.
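Since Figure 11.21 is not reproduced here, the connection pattern just described can be sketched as a symmetric weight matrix. This is an illustrative sketch only: the number of perimeter neurons (six here) and the use of index 0 for the center neuron are assumptions, not taken from the figure, while the weights of 1 follow the text above.

```python
N = 6                      # assumed number of perimeter neurons (see Figure 11.21)
SIZE = N + 1               # index 0 is the center neuron; 1..N are the perimeter

# weights[i][j] is the weight neuron i associates with the output of neuron j.
# Because every connection is two-way with a common weight, the matrix is symmetric.
weights = [[0.0] * SIZE for _ in range(SIZE)]
for p in range(1, SIZE):
    weights[0][p] = weights[p][0] = 1.0   # center <-> each perimeter neuron
    q = p % N + 1                         # next neighbor clockwise around the ring
    weights[p][q] = weights[q][p] = 1.0   # perimeter neuron <-> its neighbor
```

Each assignment sets both `weights[i][j]` and `weights[j][i]`, which captures the statement that two connected neurons associate the same weight with each other's output.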
The network operates in discrete steps in which all neurons respond to their
inputs in a synchronized manner. To determine the next configuration of the network
from its current one, we first compute the effective input of each
neuron throughout the network and then allow all the neurons to respond to
those inputs at the same time. The effect is that the entire network repeats a
coordinated cycle: compute effective inputs, respond to those inputs, compute
effective inputs again, respond again, and so on.
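This two-phase cycle can be expressed as a short function. The sketch below is illustrative rather than the book's own code: it assumes a neuron produces an excited output of 1 when its effective input exceeds its threshold and an inhibited output of 0 otherwise (the threshold values themselves are recorded in Figure 11.21 and are supplied here as a parameter).

```python
def step(outputs, weights, thresholds):
    """One synchronized step: outputs[i] is 1 (excited) or 0 (inhibited)."""
    n = len(outputs)
    # Phase 1: compute every effective input from the *current* outputs,
    # before any neuron is allowed to change state.
    effective = [sum(weights[i][j] * outputs[j] for j in range(n))
                 for i in range(n)]
    # Phase 2: all neurons respond at the same time.
    return [1 if effective[i] > thresholds[i] else 0 for i in range(n)]
```

Computing all the effective inputs before updating any output is what makes the update synchronized; interleaving the two phases would let early updates contaminate later ones within the same step.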
Consider the sequence of events that would occur if we initialized the network
with its two rightmost neurons inhibited and the other neurons excited (Figure 11.22a).