2.3 Artificial Neural Network
Artificial Neural Networks (ANNs) can be trained on the basis of experimental data [14,15], as highlighted
in other papers [16-20]; the theory on which an ANN is based was described in a previous work [13]. A two-layer feedforward Neural Network was trained using the Matlab programming language; a sigmoidal function for
the hidden layer and a linear function for the output layer were chosen as transfer functions (T). The sigmoidal
function was chosen for the hidden layer because it simplifies the gradient calculation of the error function (E)
and reduces the computational time of the training. The defined pattern is shown in Figure 1. The Levenberg-Marquardt algorithm, a fast method for minimizing the mean square
error in feedforward Neural Networks, was used for training.
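The architecture described above (sigmoid hidden layer, linear output, Levenberg-Marquardt minimization of the squared error) can be sketched outside Matlab as well. The following Python snippet is a minimal illustration using SciPy's Levenberg-Marquardt least-squares solver on synthetic data; the network size, the data, and all variable names are assumptions for illustration, not values from this study:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic stand-in for the experimental dataset (hypothetical).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))   # 3 input features (assumed)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]     # synthetic target

n_in, n_hidden = X.shape[1], 5          # 5 hidden neurons (assumed)
n_params = n_hidden * n_in + n_hidden + n_hidden + 1

def unpack(p):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = p[i:i + n_hidden * n_in].reshape(n_hidden, n_in); i += n_hidden * n_in
    b1 = p[i:i + n_hidden]; i += n_hidden
    W2 = p[i:i + n_hidden]; i += n_hidden
    b2 = p[i]
    return W1, b1, W2, b2

def forward(p, X):
    """Two-layer feedforward pass: sigmoid hidden layer, linear output."""
    W1, b1, W2, b2 = unpack(p)
    h = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))  # sigmoidal transfer function
    return h @ W2 + b2                           # linear transfer function

def residuals(p):
    # Levenberg-Marquardt minimizes the sum of these squared residuals,
    # which is proportional to the mean square error.
    return forward(p, X) - y

p0 = rng.normal(scale=0.5, size=n_params)
fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
print("final MSE:", np.mean(fit.fun ** 2))
```

Matlab's `trainlm` additionally applies early stopping on the validation set, which this bare-bones sketch omits.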
Preliminary simulations were carried out by varying the number of neurons in the hidden layer. For all the
simulations, 70% of the data was used for training, while the remaining part was used for validation and testing
of the Network (15% and 15%, respectively), in accordance with a previous work [17]. The regression values and the mean
error returned by the trained Networks were considered as control parameters; Table 2 reports these control parameters
only for the best trained Neural Networks. The best ANN, obtained with 41 neurons, was the one with the
highest training, test, and global regression values and the smallest mean error and standard
deviation.
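The 70/15/15 train/validation/test partition described above can be reproduced with a simple random index split; a minimal sketch (function name and seed are arbitrary, not from the paper):

```python
import numpy as np

def split_70_15_15(n_samples, seed=0):
    """Random 70% / 15% / 15% train / validation / test index split."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(0.70 * n_samples)
    n_val = int(0.15 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_70_15_15(100)
print(len(train), len(val), len(test))  # -> 70 15 15
```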
Figure 2 shows the comparison between the PMV calculated using the Artificial Neural Network and with the heat
balance approach, with respect to the questionnaire data; the ANN results are better correlated with the
questionnaire data than those calculated with the Fanger static model approach. The regression line is
closer to the bisecting line, which represents a perfect correlation (PMVq = PMVANN); the regression coefficient is
also higher (R2ANN = 0.57; R2Fanger = 0.23).
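A regression coefficient of this kind can be computed as the squared linear correlation between the questionnaire votes and the model predictions; the sketch below uses synthetic placeholder data (the arrays and noise level are assumptions, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
pmv_q = rng.uniform(-2, 2, size=50)                   # hypothetical questionnaire PMV
pmv_ann = pmv_q + rng.normal(scale=0.4, size=50)      # hypothetical ANN predictions

# R^2 of the linear regression of predictions on questionnaire votes:
# the squared Pearson correlation coefficient.
r2 = np.corrcoef(pmv_q, pmv_ann)[0, 1] ** 2
print("R^2 =", round(r2, 2))
```

A value of 1 would correspond to the bisecting line PMVq = PMVANN, i.e. perfect agreement.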