We study the issue of error diversity in ensembles of neural networks. In ensembles of regression estimators, the measurement of diversity can be formalised as the Bias-Variance-Covariance decomposition, stated below.
In ensembles of classifiers, no such neat theory exists in the literature to date. Our objective is to understand how to precisely define, measure, and create diverse errors in both cases. As a focal point we study one algorithm, Negative Correlation (NC) Learning, which was claimed, with supporting empirical evidence, to enforce useful error diversity, creating neural network ensembles with highly competitive performance on both classification and regression problems. Since its dynamics are not yet solidly understood, we engage in a theoretical and empirical investigation; a minimal illustration of the NC mechanism is sketched below.
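To make the object of study concrete, the following is a minimal sketch of NC Learning on a toy regression problem. Everything here is an illustrative assumption rather than the thesis's own experimental setup: the data, network sizes, learning rate, the penalty coefficient lam, and the common simplification of treating the ensemble mean as a constant when differentiating each member's penalty.

    # Minimal sketch of Negative Correlation (NC) Learning for regression.
    # Assumptions (not from the text above): toy 1-D data, single-hidden-layer
    # tanh networks, full-batch gradient descent, and the simplified gradient
    # in which the ensemble mean f_bar is held constant w.r.t. each member.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression data: y = sin(x) + noise (an assumed example problem).
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

    M, H, lr, lam = 5, 10, 0.01, 0.5  # ensemble size, hidden units, step, NC penalty

    # Initialise M small networks, each with one tanh hidden layer and a scalar output.
    nets = [{"W1": rng.normal(0, 0.5, (1, H)), "b1": np.zeros(H),
             "W2": rng.normal(0, 0.5, (H, 1)), "b2": np.zeros(1)} for _ in range(M)]

    def forward(net, X):
        h = np.tanh(X @ net["W1"] + net["b1"])   # hidden activations
        return h, h @ net["W2"] + net["b2"]      # member output f_i(x)

    for epoch in range(2000):
        hs, fs = zip(*(forward(net, X) for net in nets))
        f_bar = np.mean(fs, axis=0)              # simple-average ensemble output
        for net, h, f in zip(nets, hs, fs):
            # NC error signal: (f_i - d) - lam * (f_i - f_bar).
            # The second term reduces the pull toward the ensemble mean,
            # negatively correlating the members' errors.
            delta = (f - y) - lam * (f - f_bar)  # shape (N, 1)
            # Backpropagate delta through the two layers (averaged over the batch).
            dW2 = h.T @ delta / len(X)
            db2 = delta.mean(axis=0)
            dh = (delta @ net["W2"].T) * (1 - h ** 2)  # tanh derivative
            dW1 = X.T @ dh / len(X)
            db1 = dh.mean(axis=0)
            for k, g in zip(("W1", "b1", "W2", "b2"), (dW1, db1, dW2, db2)):
                net[k] -= lr * g

    mse = np.mean((np.mean([forward(n, X)[1] for n in nets], axis=0) - y) ** 2)
    print(f"ensemble training MSE: {mse:.4f}")

Setting lam = 0 recovers independent training of the members; raising it trades individual accuracy for negatively correlated errors, which is the trade-off the investigation above seeks to characterise.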