Holdout validation is often used to estimate the generalization capability
of a model. This method randomly partitions the data into training and test
subsets. The former subset is used to fit the model (typically with 2/3 of the
data), while the latter (with the remaining 1/3) is used to compute the estimate.
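As a minimal sketch of such a split, assuming scikit-learn and a synthetic dataset (the regressor, features X, and targets y below are illustrative placeholders, not part of the original study):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Hypothetical data standing in for the dataset under study.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=150)

    # Holdout: 2/3 of the data to fit the model, the remaining 1/3 for the estimate.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1 / 3, random_state=0
    )
    model = LinearRegression().fit(X_train, y_train)
    print("holdout MAE:", mean_absolute_error(y_test, model.predict(X_test)))

Because the estimate depends on a single random split, it can vary considerably between runs, which motivates the more robust procedure described next.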
A more robust estimation procedure is k-fold cross-validation [8], where the
data are divided into k partitions of equal size. In each iteration, one subset
is held out for testing and the remaining data are used to fit the model. The process is repeated
sequentially until all subsets have been tested. Therefore, under this scheme, all
data are used for training and testing. However, this method requires around k
times more computation, since k models are fitted. The validation procedure
will be applied over several runs, and statistical confidence will be given by
the Student's t-test at the 95% confidence level.
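A sketch of this procedure, again assuming scikit-learn and placeholder data and estimator; the number of runs, the choice of k, and the use of the mean absolute error are illustrative assumptions, with the 95% interval computed from the Student's t distribution over the per-run means:

    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import KFold

    # Hypothetical data standing in for the dataset under study.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 4))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=150)

    runs, k = 20, 10
    run_means = []
    for run in range(runs):
        # Each run reshuffles the data into k folds; k models are fitted per run.
        kf = KFold(n_splits=k, shuffle=True, random_state=run)
        fold_errors = []
        for train_idx, test_idx in kf.split(X):
            model = LinearRegression().fit(X[train_idx], y[train_idx])
            fold_errors.append(
                mean_absolute_error(y[test_idx], model.predict(X[test_idx]))
            )
        # Every sample is tested exactly once per run.
        run_means.append(np.mean(fold_errors))

    # 95% confidence interval for the mean error via the t distribution.
    mean = np.mean(run_means)
    half_width = stats.t.ppf(0.975, df=runs - 1) * stats.sem(run_means)
    print(f"MAE = {mean:.3f} +/- {half_width:.3f} (95% confidence)")

Reporting the interval rather than a single number makes explicit how much of the observed difference between models could be due to the randomness of the partitioning.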