Several training algorithms are experimentally investigated to arrive at the best model for predicting FLMY305: (i) gradient descent with adaptive learning rate; (ii) the Fletcher–Reeves conjugate gradient algorithm; (iii) the Polak–Ribière conjugate gradient algorithm; (iv) the Powell–Beale conjugate gradient algorithm; (v) a quasi-Newton algorithm with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update; and (vi) the Levenberg–Marquardt algorithm with Bayesian regularization. Alongside the training algorithm, various network architectural parameters are also examined, e.g., the data-partitioning strategy, initial synaptic weights, number of hidden layers, number of neurons in each hidden layer, activation functions, and the regularization factor.
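As an illustration of the first algorithm in the list, the following is a minimal, hypothetical sketch of gradient descent with an adaptive learning rate: the step size is grown after each successful (loss-reducing) step and shrunk after a failed one. The function names (`adaptive_gd`) and the one-dimensional toy loss are assumptions for demonstration only, not the actual FLMY305 network or its loss surface.

```python
def adaptive_gd(grad, loss, w0, lr=0.1, inc=1.05, dec=0.5, steps=200):
    """Gradient descent with a simple adaptive learning rate (sketch).

    grad, loss: callables giving the gradient and loss at a point.
    inc, dec:   multiplicative factors applied to the learning rate
                after an accepted or rejected step, respectively.
    """
    w, prev = w0, loss(w0)
    for _ in range(steps):
        cand = w - lr * grad(w)
        cur = loss(cand)
        if cur < prev:        # loss decreased: accept step, grow lr
            w, prev = cand, cur
            lr *= inc
        else:                 # loss increased: reject step, shrink lr
            lr *= dec
    return w

# Toy quadratic loss (w - 3)^2 as a hypothetical stand-in for the
# network's training error; the minimizer is w = 3.
w_star = adaptive_gd(grad=lambda w: 2 * (w - 3),
                     loss=lambda w: (w - 3) ** 2,
                     w0=0.0)
```

The same accept/reject logic underlies adaptive-rate variants of batch gradient descent; the other five algorithms differ mainly in how the search direction (conjugate gradient, quasi-Newton, Levenberg–Marquardt) is chosen rather than in this step-size control.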