Neural Network Basics
A neural network is a system that takes numeric inputs, performs
computations on them, and produces one or more numeric outputs.
Once a neural net has been designed and trained for a specific
application, it outputs approximately correct values for the inputs it is given.
For example, a net could have inputs representing some easily
measured characteristics of an abalone (a sea animal), such as its length,
diameter and weight. The computations performed inside the net
would produce a single number that is generally close to the age of
the animal, which is much harder to determine directly.
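A rough sketch of this idea in Python appears below: a toy net with three inputs, two internal elements and one output. Every weight in it is invented purely for illustration; a real abalone model would learn its weights from measured data.

```python
import math

def predict_abalone_age(length, diameter, weight):
    """Toy feed-forward net: 3 inputs -> 2 hidden elements -> 1 output.
    All weights are made up for illustration; a trained net would have
    learned them from data."""
    # Each hidden element forms a weighted sum of the inputs and
    # squashes it with tanh.
    h1 = math.tanh(0.8 * length + 0.5 * diameter + 1.2 * weight - 0.3)
    h2 = math.tanh(-0.4 * length + 0.9 * diameter + 0.7 * weight + 0.1)
    # The output is a weighted sum of the hidden values: the age estimate.
    return 6.0 + 4.0 * h1 + 3.0 * h2

# Measurements of one abalone (arbitrary example values).
print(predict_abalone_age(length=0.455, diameter=0.365, weight=0.514))
```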
The inspiration for neural nets comes from the structure of the brain.
A brain consists of a large number of cells, referred to as "neurons". A
neuron receives impulses from other neurons through a number of
"dendrites". Depending on the impulses received, a neuron may send
a signal to other neurons, through its single "axon", which connects to
dendrites of other neurons. Like the brain, artificial neural nets
consist of elements, each of which receives a number of inputs, and
generates a single output, where the output is a relatively simple
function of the inputs.
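In Python, one such element can be sketched as a weighted sum of its inputs passed through a simple nonlinearity. The logistic (sigmoid) function used here is one common choice among several, and the example weights are arbitrary.

```python
import math

def neuron(inputs, weights, bias):
    # A single artificial element: weight each input, sum, add a bias,
    # then squash the result with the logistic (sigmoid) function.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# An element with three inputs (weights and bias chosen arbitrarily).
print(neuron(inputs=[0.5, -1.0, 2.0], weights=[0.4, 0.6, -0.2], bias=0.1))
```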
Neural Nets vs. Statistical Methods
Neural nets provide an alternative to more traditional statistical
methods. Like Linear Regression, they are used for function
approximation. Like Discriminant Analysis and Logistic Regression,
they are used for classification. The advantage of neural nets is that
they can model highly nonlinear, complex functions, in contrast
with the traditional linear techniques (Linear
Regression and Linear Discriminant Analysis). Techniques for
optimizing linear models were well known before artificial neural
nets were invented in the middle of the 20th century. Effective
algorithms for training neural nets took many years to develop.
However, we now have a range of sophisticated algorithms for neural
net training, making them an attractive alternative to the more
traditional methods.
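To make the contrast with linear methods concrete, the sketch below shows a tiny net that represents the XOR relationship, something no purely linear model can fit. The weights are chosen by hand to illustrate representational power; they are not produced by any training algorithm.

```python
def step(x):
    # Threshold activation: outputs 1 for positive input, otherwise 0.
    return 1.0 if x > 0 else 0.0

def xor_net(x1, x2):
    # Two hidden elements feed a single output element.
    h1 = step(x1 + x2 - 0.5)   # fires if at least one input is 1
    h2 = step(x1 + x2 - 1.5)   # fires only if both inputs are 1
    return step(h1 - 2.0 * h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```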