2. Models and methods
2.1. ANN ensemble modeling
An ANN consists of many simple, highly interconnected
processors called neurons. A neuron is an
information-processing unit that is fundamental to the operation
of a neural network, and consists of weights and an
activation function (Fig. 1). The weights are the most important
parameters, acting as the memory of the ANN, and the activation
function provides the network with nonlinear mapping
capability. The manner in which the neurons of an ANN are
structured determines its architecture (Haykin,
1999). In general, there are three fundamentally different
classes of network architecture. The first is a single-layer
feedforward network, without hidden layers. The second is a
multilayer feedforward network, with one or more hidden
layers. The third is a recurrent neural network, with at least one
feedback loop. In this study, the multilayer feedforward neural
network (MFNN) with one hidden layer was used, because it
is able to approximate most of the nonlinear functions
demanded by practice (Mulia et al., 2013).
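As a minimal illustration of the MFNN used here, the sketch below implements the forward pass of a feedforward network with one hidden layer. The layer sizes, the sigmoid hidden activation, and the linear output are illustrative assumptions, not the specific configuration of this study.

```python
import numpy as np

def mfnn_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer feedforward network.

    Hidden neurons use a sigmoid activation (a common choice);
    the output neuron is linear, as is typical for regression.
    """
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # hidden-layer activations
    return W2 @ h + b2                         # linear output

# Illustrative sizes: 3 inputs, 4 hidden neurons, 1 output.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
b1 = rng.standard_normal(4)
W2 = rng.standard_normal((1, 4))
b2 = rng.standard_normal(1)
y = mfnn_forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2)
```

The weight matrices W1 and W2, together with the biases, play the role of the network memory described above; the sigmoid supplies the nonlinear mapping.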
The weight parameters on the links between neurons are
determined by the training algorithm. The most common
standard algorithm is the backpropagation training algorithm,
whose central idea is that the errors of the hidden-layer
neurons are determined by back-propagating the
error of the output-layer neurons, as shown in Fig. 1.
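The back-propagation of the output error to the hidden layer can be sketched as a single steepest-descent update for the one-hidden-layer network above; the sigmoid activation, squared-error loss, and learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, W1, b1, W2, b2, lr=0.1):
    """One backpropagation update on a single (input x, target t) pair."""
    # Forward pass through a one-hidden-layer network
    h = sigmoid(W1 @ x + b1)
    y = W2 @ h + b2
    # Output-layer error (linear output, squared-error loss)
    delta_out = y - t
    # Hidden-layer errors: the output error propagated backwards
    # through W2, scaled by the sigmoid derivative h * (1 - h)
    delta_hidden = (W2.T @ delta_out) * h * (1.0 - h)
    # Steepest-descent weight updates
    W2 -= lr * np.outer(delta_out, h)
    b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hidden, x)
    b1 -= lr * delta_hidden
    return 0.5 * float(delta_out @ delta_out)  # current squared error
```

Repeating this step drives the squared error down; the variants mentioned below replace the steepest-descent update with more sophisticated search directions.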
A number of variants of the basic backpropagation
algorithm exist that are based on other
standard optimization techniques, such as the steepest descent
algorithm, the conjugate gradient algorithm, and Newton's
method. Among these methods, the Levenberg-Marquardt
(LM) algorithm has been applied very successfully
to the training of ANNs to predict streamflow and water