An artificial neural network (ANN) is a system loosely modeled on the human brain. ANNs offer a fundamentally different approach to
problem solving, and they are sometimes called the sixth generation of computing. ANNs are powerful techniques for solving many real-world problems (El Emary 2006; Huang Mei et al. 2006). They can learn from experience to improve their performance and to adapt to changes in their environment. From the standpoint of statistics or econometrics, ANN models are a particular class of nonlinear input–output models. NNs have been shown to outperform conventional linear models (including regression, univariate time series models,
and multivariate time series transfer function models) as well as some nonlinear ones. Unlike regression, applying an NN does not require the data to satisfy the assumptions a regression model imposes. An ANN learns the relationship, or mapping, between inputs and outputs during training. In the supervised training used in this study, pairs of input and target data are presented: an input is propagated through the ANN, the model output is compared with the target output, and the weights between nodes are updated to minimize the error between the simulated and target outputs.

The design of the network architecture (topology) and the methods for training, testing, evaluating, and implementing the network are critical. Architecture design involves choosing the NN algorithm, the structure (the number of layers and the number of neurons per layer), the input and output functions, and the learning parameters. This research focuses on the back-propagation learning algorithm, which seeks to minimize the error between the network output and the actual desired output. The error term is calculated by comparing the network output to the desired output and is then fed back through the network, adjusting the synaptic weights in an effort to minimize the error. The process is repeated until the error reaches a minimum value (Haykin 1994).
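The supervised back-propagation cycle described above (forward pass, error term, weight update, repeat) can be sketched in a few lines of numpy. This is a minimal illustration, not the network used in the study: the single hidden layer of four sigmoid neurons, the squared-error loss, the learning rate, and the toy XOR input–target pairs are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy supervised training pairs (XOR mapping) -- illustrative only.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Synaptic weights: input -> hidden (2x4) and hidden -> output (4x1).
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))
lr = 0.5  # learning parameter (step size), an assumed value

def forward(X):
    H = sigmoid(X @ W1)   # hidden-layer activations
    Y = sigmoid(H @ W2)   # network output
    return H, Y

_, Y0 = forward(X)
mse_init = float(np.mean((Y0 - T) ** 2))

for epoch in range(5000):
    # Forward pass: propagate the input through the network.
    H, Y = forward(X)
    # Error term: network output minus desired (target) output.
    E = Y - T
    # Backward pass: feed the error back and update the weights
    # in the direction that reduces the squared error.
    dY = E * Y * (1 - Y)             # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)   # hidden-layer delta
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH

_, Y = forward(X)
mse = float(np.mean((Y - T) ** 2))
```

After training, the mean squared error should be well below its initial value, mirroring the iterative error-minimization loop the text describes.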