The neural network was trained with the backpropagation gradient descent algorithm, one of the most widely used approaches to neural network training (Bishop, 1995). Trial and error showed that repeated exposure to randomly selected members of the training dataset, with a learning rate of 0.05 and a momentum rate of 1, gave the best results at between 5,000 and 10,000 training iterations; a training regime of 10,000 steps was therefore adopted. Following training, the network was used to predict values for the single data point that had been left out of the training dataset, and the predicted values were then adjusted to fall within the appropriate minimum–maximum range of each parameter in the dataset.
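The procedure described above — stochastic backpropagation with momentum over randomly drawn training examples, leave-one-out prediction of the held-out point, and clamping of the outputs to each parameter's observed range — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network size, weight initialisation, sigmoid hidden layer, and toy data are assumptions, and the momentum coefficient is set to a conventional 0.9 for numerical stability on this toy problem rather than the value of 1 reported in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_loo(X, y, hidden=8, lr=0.05, momentum=0.9, steps=10_000, holdout=0):
    """Train on all rows except `holdout`, then predict the held-out row."""
    Xtr = np.delete(X, holdout, axis=0)
    ytr = np.delete(y, holdout, axis=0)

    n_in, n_out = X.shape[1], y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)   # momentum buffers
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(steps):
        i = rng.integers(len(Xtr))          # random exposure to one example
        x, t = Xtr[i], ytr[i]
        h = sigmoid(x @ W1 + b1)            # forward pass, sigmoid hidden layer
        o = h @ W2 + b2                     # linear output layer
        # backpropagate squared-error gradients
        d_o = o - t
        d_h = (d_o @ W2.T) * h * (1.0 - h)
        # gradient descent updates with momentum
        vW2 = momentum * vW2 - lr * np.outer(h, d_o); W2 += vW2
        vb2 = momentum * vb2 - lr * d_o;              b2 += vb2
        vW1 = momentum * vW1 - lr * np.outer(x, d_h); W1 += vW1
        vb1 = momentum * vb1 - lr * d_h;              b1 += vb1

    pred = sigmoid(X[holdout] @ W1 + b1) @ W2 + b2
    # adjust predictions to lie within each parameter's min-max range
    return np.clip(pred, y.min(axis=0), y.max(axis=0))

# toy data: 12 samples, 3 inputs, 2 target parameters (illustrative only)
X = rng.random((12, 3))
y = rng.random((12, 2))
prediction = train_loo(X, y)
```

In a full leave-one-out run, `train_loo` would be called once per sample with each row in turn as `holdout`, and the collected predictions compared against the observed values.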