ANNs are flexible, nonparametric modeling tools that can approximate any complex function mapping to any desired accuracy (Gennaro et al., 2013; Saufie et al., 2013). Modeling with an ANN involves a learning (training and validation) process and a testing process, which use historical data to determine the nonlinear relationships between the variables in the input and output data sets. The
basic structure of an ANN is composed of input and output neurons arranged in different layers, the interconnection weights between them, and their internal transfer functions. The most common
types of ANNs used in forecasting studies are multilayer
perceptron neural networks (MLP-ANN), which are constructed
with three layers: input, hidden, and output layers (as shown in
Figure 3). This class of networks is usually interconnected in a feed-forward manner. Such networks can be trained with a variety of learning techniques, including back-propagation, conjugate gradient, and the generalized delta rule, of which the error back-propagation method is the most popular. The internal transfer function used to compute an output y from the various inputs xi is expressed as follows: