Artificial neural networks (ANNs) (6) provide an alternative to regression approaches for model development. Like other modeling approaches, an ANN can be thought of as a minimization technique whose goal is to minimize the difference between model output and actual output. Proposed advantages of ANNs over conventional methods include that they automatically accommodate nonlinear relationships between predictor and response variables and incorporate interactions between variables without the additional modeling steps required by standard statistical approaches (50). Artificial neural network models are fitted by
adjusting weights, which are analogous to coefficients/parameters in regression modeling. The predictors and responses are repeatedly presented to the network during a process called training; after each pass through the network, the internal weights are adjusted so as to minimize the difference between the network output and the actual output. Through this iterative process, the network learns the relationship between the predictor and response variables, so that when presented with a new set of inputs (validation), it can predict the outcome from that relationship. There are several classes of ANN, each employing a different type of architecture. One of the most commonly used is the multilayered network trained through error back-propagation (the back-propagation ANN, BPNN) (6).
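To make the training procedure described above concrete, the following is a minimal sketch of a one-hidden-layer BPNN implemented in Python with NumPy. The layer sizes, learning rate, number of training passes, and synthetic data are illustrative assumptions, not values from any particular study.

```python
# A minimal sketch of a back-propagation ANN (BPNN) with one hidden layer.
# All sizes, the learning rate, and the synthetic data are illustrative
# assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 100 samples, 3 predictors, 1 response (hypothetical).
X = rng.normal(size=(100, 3))
y = np.tanh(X @ np.array([[0.5], [-1.0], [2.0]]))

# Initialize weights (the analogue of regression coefficients).
W1 = rng.normal(scale=0.1, size=(3, 5))   # input -> hidden layer
W2 = rng.normal(scale=0.1, size=(5, 1))   # hidden -> output layer
lr = 0.1                                   # learning rate

for epoch in range(1000):                  # repeated presentation ("training")
    # Forward pass: compute the network output.
    h = np.tanh(X @ W1)                    # hidden-layer activations
    out = h @ W2                           # network output (linear output unit)

    # Error between network output and actual output.
    err = out - y

    # Backward pass: propagate the error and adjust the weights so as to
    # reduce the mean squared difference.
    grad_W2 = h.T @ err / len(X)
    grad_h = err @ W2.T * (1 - h**2)       # derivative of tanh activation
    grad_W1 = X.T @ grad_h / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# After training, the fitted weights let the network predict responses
# for new inputs (validation).
X_new = rng.normal(size=(5, 3))
y_pred = np.tanh(X_new @ W1) @ W2
print("Training MSE:", float(np.mean(err**2)))
```

In this sketch, each pass through the loop corresponds to one presentation of the data: the forward pass produces the network output, the error is computed, and the weight updates move the network toward a smaller output error, which is the iterative learning process described above.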