A more sophisticated neuron is the McCulloch and Pitts model (MCP). The difference from the previous model is that the inputs are ‘weighted’; the effect that each input has on decision-making depends on the weight of the particular input. The weight of an input is a number which, when multiplied by the input, gives the weighted input. These weighted inputs are then added together and, if their sum exceeds a pre-set threshold value, the neuron fires; otherwise it does not fire.
In mathematical terms, the neuron fires if and only if

    X1W1 + X2W2 + ... + XnWn > T

where X1, ..., Xn are the inputs, W1, ..., Wn their weights, and T the threshold. The introduction of weights and of a threshold makes this neuron a very flexible and powerful one: the MCP neuron has the ability to adapt to a particular situation by changing its weights and/or threshold. Various algorithms exist that cause the neuron to 'adapt'; the most widely used are the Delta rule and back-error propagation. The former is used to train single-layer feed-forward networks, while the latter generalizes it to multi-layer feed-forward networks; a sketch of the Delta rule follows.
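To make this concrete, here is a minimal sketch of an MCP neuron trained with the Delta rule. The step firing condition follows the description above; the learning rate, the AND training data, and the epoch count are illustrative assumptions, not part of the original model.

```python
# Minimal sketch of an MCP neuron with a Delta-rule update.
# Learning rate, training data, and epoch count are illustrative choices.

def mcp_fire(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of the inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

def delta_rule_step(inputs, weights, threshold, target, lr=0.1):
    """One Delta-rule update: nudge each weight in proportion to the error."""
    error = target - mcp_fire(inputs, weights, threshold)
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_threshold = threshold - lr * error  # raising the threshold makes firing harder
    return new_weights, new_threshold

# Example: adapt the neuron to compute the logical AND of two inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold = [0.0, 0.0], 0.0
for _ in range(20):  # a few passes over the data suffice here
    for inputs, target in data:
        weights, threshold = delta_rule_step(inputs, weights, threshold, target)

for inputs, target in data:
    print(inputs, "->", mcp_fire(inputs, weights, threshold), "(target:", target, ")")
```

Note that the threshold is adjusted in the opposite direction to the weights, since raising the threshold makes the neuron harder to fire.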
4 Architecture of neural networks
4.1 Feed-forward networks
Feed-forward ANNs allow signals to travel one way only: from input to output. There is no feedback (no loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organization is also referred to as bottom-up or top-down.
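As a rough illustration of this one-way signal flow, the sketch below pushes an input vector through two layers of MCP-style neurons; the layer sizes, weights, and thresholds are arbitrary values chosen for the example, not taken from the text.

```python
# Minimal sketch of a feed-forward pass: signals move strictly from
# input to output, and no layer's output is fed back to itself.
# All weights and thresholds below are illustrative assumptions.

def step(x):
    return 1 if x > 0 else 0

def layer(inputs, weight_rows, thresholds):
    """Compute one layer of MCP-style neurons from the previous layer's outputs."""
    return [step(sum(x * w for x, w in zip(inputs, row)) - t)
            for row, t in zip(weight_rows, thresholds)]

inputs = [1, 0]
hidden = layer(inputs, [[0.5, 0.5], [-0.4, 0.9]], [0.3, 0.2])  # input -> hidden
output = layer(hidden, [[1.0, -0.6]], [0.1])                   # hidden -> output
print(hidden, output)
```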