The inability of assessment center (AC) researchers to find admissible
solutions for confirmatory factor analytic (CFA) models that include dimensions
has led some to conclude that ACs do not measure dimensions
at all. This study investigated whether increasing the indicator–factor
ratio facilitates the achievement of convergent and admissible CFA solutions
in 2 independent ACs. Results revealed that, when models specified
multiple behavioral checklist items as manifest indicators of each latent
dimension, all of the AC CFA models tested were identified and returned
proper solutions. With a full set of model comparisons available, and
with model fit rather than solution convergence and admissibility serving
as the comparative criterion, we found clear evidence of
modest dimension effects. These results suggest that the frequent failure
to find dimensions in models of the internal structure of ACs is a
methodological artifact and that one way to increase the likelihood
of reaching a proper solution is to increase the number of manifest indicators
for each dimension factor. In addition, across-exercise dimension
ratings and the overall assessment rating were both strongly correlated
with the dimension and exercise factors, indicating that regardless of how
an AC is scored, exercise variance will continue to play a key role in the
scoring of ACs.