Results
Selection of ANN models
A rapid back-propagation learning algorithm (Battiti, 1992)
was used in all ANN models, and architectures with different
numbers of hidden neurons were explored to select the
best architecture. The number of
iterations was controlled to prevent the network from
becoming overtrained. Overtraining was identified using an
early-stopping technique, defined as the point where
the classification error in the validation data set started
to increase with further iterations. The relationship
between the number of iterations and the classification
error in validation data sets is given in Fig. 1.
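
As an illustration of this early-stopping criterion, the short sketch below (not part of the original study) scans a recorded validation-error curve and returns the iteration at which the error stops improving; the function name and the patience parameter, which tolerates brief fluctuations before stopping, are assumptions introduced here.

def early_stopping_point(val_errors, patience=3):
    # val_errors: classification error on the validation data set,
    # recorded once per training iteration (or per checkpoint).
    # patience: hypothetical tolerance for consecutive non-improving
    # iterations before training is judged to have overtrained.
    best_err, best_iter, worse = float('inf'), 0, 0
    for i, err in enumerate(val_errors):
        if err < best_err:
            best_err, best_iter, worse = err, i, 0
        else:
            worse += 1
            if worse >= patience:
                break
    return best_iter, best_err

Applied to a validation-error curve of the kind shown in Fig. 1, early_stopping_point(validation_errors) would return the iteration taken as the stopping point and the error at that point.
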
An architecture with four to five hidden neurons, a sigmoid
activation function in both layers, and 1500–2500 training
iterations performed best. When data from all Australian field
sites were used in training, the ANN model was 60% accurate
in predicting severity classes for the South American sites,
and the prediction error was 100% for the Caquetá
site. Similarly, the
model using data from all South American sites predicted
severity classes for the Australian sites poorly (data not
shown). Stylosanthes has only recently been introduced to
Caquetá, and no C. gloeosporioides inoculum or anthracnose
symptoms were recorded at this site during the 1994–96
period. Data from Caquetá were therefore excluded, and all
further analysis was conducted on the remaining six Australian
and South American sites. This increased the prediction
success of the Australian model to >77% and that
of the South American model to >76%. The classification
errors for the various ANN models with three to 10 hidden
neurons developed with or without data from the Caquetá
site are given in Fig. 2.
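
For readers wishing to set up a comparable network, the selected architecture could be sketched with a general-purpose library as follows. This is an assumption-laden illustration, not the authors' implementation: scikit-learn does not provide Battiti's (1992) rapid back-propagation algorithm, so a standard solver stands in, and the data variable names are hypothetical.

from sklearn.neural_network import MLPClassifier

# Minimal sketch of the selected architecture: one hidden layer of
# four to five neurons, sigmoid (logistic) activation, and an iteration
# cap near the upper end of the 1500-2500 range. The 'adam' solver is a
# stand-in; the original study used Battiti's (1992) rapid
# back-propagation, which this library does not implement.
model = MLPClassifier(
    hidden_layer_sizes=(5,),    # four to five hidden neurons
    activation='logistic',      # sigmoid activation in the hidden layer
    solver='adam',
    max_iter=2500,
    early_stopping=True,        # hold out part of the training data and
    validation_fraction=0.2,    # stop when validation score stops improving
    n_iter_no_change=10,
    random_state=0,
)
# model.fit(X_train, y_train)   # X_train, y_train: hypothetical predictor
#                               # matrix and anthracnose severity classes

Note that in this library the output-layer activation is fixed by the classification task, so "sigmoid for both layers" cannot be specified directly; the hidden-layer setting is the closest equivalent.
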