When an MLP has too many neurons in its hidden layer, some of them acquire output weights near zero, meaning that they contribute little to the fitted curve. Regularization can therefore be viewed as a neuron selection method: we need not determine the exact number of hidden neurons in advance, since regularization effectively prunes the superfluous ones. The resulting network has good generalization properties, so we expect an MLP trained on the training set to perform well on the test data. In our models we used five hidden neurons and 100 training epochs.
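To illustrate this pruning effect, the following is a minimal sketch using scikit-learn's MLPRegressor, where the L2 penalty is controlled by the `alpha` parameter. The synthetic sine dataset, the deliberately over-sized 20-neuron hidden layer, and the chosen `alpha` value are illustrative assumptions, not the paper's configuration; after training, the magnitude of each hidden neuron's outgoing weight indicates how much it contributes to the fitted curve.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical 1-D regression data; not the paper's dataset.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# Deliberately over-sized hidden layer; alpha is the L2 penalty strength.
mlp = MLPRegressor(hidden_layer_sizes=(20,), alpha=1e-2,
                   max_iter=2000, random_state=0)
mlp.fit(X, y)

# Magnitude of each hidden neuron's outgoing weight: near-zero values
# mark neurons that contribute little to the fitted curve, i.e. the
# neurons that regularization has effectively deselected.
out_w = np.abs(mlp.coefs_[1]).ravel()
for i, w in enumerate(out_w):
    print(f"hidden neuron {i:2d}: |output weight| = {w:.4f}")
```

With a sufficiently strong penalty, several of the outgoing weights shrink toward zero, so the effective number of hidden neurons is smaller than the nominal width, which is the selection behavior described above.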