Neural Networks
An ANN is a mathematical model whose operating principle is based on biological neural networks (Haykin, 1999). The ANN architecture comprises a series of interconnected neurons arranged in layers through which inputs are processed. These input values are multiplied by the synaptic weights, which represent the strength of the neural connections. Figure 1 shows a typical feedforward ANN structure containing an input, hidden, and output layer. This configuration is widely used for function approximation in systems where no time-dependent relationship exists among the network inputs. Increasing the size of the hidden layer allows more intricate fitting of nonlinear processes; however, larger hidden layers also increase the risk of overfitting the training data, which degrades generalization (Demuth et al., 2010). Many methods exist for improving generalization, such as data filtering, feedback elements, regularization, and network reduction. Reducing the number of neurons in the hidden layer is an effective way to improve generalization because small networks lack the capacity to overfit the training data. The synaptic weights are configured during backpropagation training (Hecht-Nielsen, 1989). Once trained, a SANN has no feedback elements and contains no delays.
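As a concrete illustration of this training procedure, the sketch below implements a single-hidden-layer feedforward network trained by backpropagation to approximate a simple nonlinear function. The layer size, activation functions, learning rate, and target function are illustrative assumptions, not choices taken from the cited sources.

```python
# Minimal sketch (illustrative only): a feedforward network with one small
# hidden layer, trained by backpropagation to approximate y = sin(x).
import numpy as np

rng = np.random.default_rng(0)

# Training data: samples of a nonlinear function on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)

n_hidden = 10                              # small hidden layer to limit overfitting
W1 = rng.normal(0.0, 0.5, (1, n_hidden))   # input -> hidden synaptic weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1))   # hidden -> output synaptic weights
b2 = np.zeros(1)
lr = 0.05                                  # learning rate (assumed value)

for epoch in range(5000):
    # Forward pass: inputs are multiplied by the synaptic weights
    h = np.tanh(X @ W1 + b1)               # hidden-layer activations
    y_hat = h @ W2 + b2                    # linear output layer

    # Error on the training data (mean-squared-error gradient uses err directly)
    err = y_hat - Y

    # Backpropagation: push the error gradient back through the layers
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)       # derivative of tanh
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    # Gradient-descent update of the synaptic weights
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err**2).mean()))
```

Keeping `n_hidden` small in this sketch mirrors the network-reduction argument above: a compact hidden layer can still fit the nonlinearity while having little capacity to overfit the training samples, and the trained network has no feedback paths or delays.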