The weights are determined so as to minimize the sum of squared differences between the network outputs and the desired outputs. In this study, we use error back-propagation, which is probably the most widely used algorithm for training MLPs [Rumelhart, Hinton and Williams, 1986]. It is essentially a gradient descent on the error computed over a suitable learning set.
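For reference, a standard formulation of this objective and of the corresponding gradient-descent update is sketched below. Here $o_k^{(p)}$ and $d_k^{(p)}$ denote the network output and the desired output for unit $k$ on pattern $p$, $w_{ij}$ a generic weight, and $\eta$ the learning rate; these symbols are assumptions of this sketch rather than notation defined in the text.

\[
E \;=\; \frac{1}{2} \sum_{p} \sum_{k} \left( o_k^{(p)} - d_k^{(p)} \right)^{2},
\qquad
\Delta w_{ij} \;=\; -\,\eta \, \frac{\partial E}{\partial w_{ij}} .
\]

Back-propagation computes the partial derivatives $\partial E / \partial w_{ij}$ layer by layer via the chain rule, so the same update rule applies to every weight in the network.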