Fritzsche Ergonomics Risk Assessment with DHM
simulated with Delmia V5 human modeling software
SAFEWORK using a company-specific version of the
AAWS for assessing static postures, action forces, manual
material handling, and extra strains.
Results generally demonstrated that AAWS risk assessments in real-life tasks and in DHM simulations yielded similar outcomes. AAWS total scores in real-life tasks and in the corresponding DHM simulations correlated fairly highly. Assuming that the ergonomists' real-life assessments were correct (the high inter-rater reliability for AAWS scores in real-life tasks shows that the two experts agreed on their assessment results relatively well), DHM simulations provide good estimates of the overall workload in real-life tasks when AAWS is used as a comprehensive method for ergonomics risk assessment.
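For illustration, the paired comparison underlying such correlation and mean-difference figures can be sketched as follows. The scores below are invented for the example and are not the study's data; the helper function is a plain Pearson correlation, which may differ from the exact statistic reported in the study.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired AAWS total scores for six tasks (not the study's data):
real_scores = [12, 25, 8, 31, 17, 22]   # assessed on the shop floor
sim_scores  = [15, 29, 11, 35, 20, 28]  # assessed in the DHM simulation

r = pearson_r(real_scores, sim_scores)
mean_diff = statistics.mean(s - x for x, s in zip(real_scores, sim_scores))
print(f"r = {r:.2f}, mean paired difference = {mean_diff:.1f} points")
```

A high r with a positive mean paired difference reproduces the pattern described here: simulated and real-life scores rank the tasks similarly, yet the simulation scores sit systematically higher.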
There were also some relevant differences between the assessments of real-life tasks and of DHM simulations. Only 60% of the assembly tasks were classified into the same risk category (green, yellow, or red) in real life and in simulation. This finding was reflected in moderate correlation measures. More importantly, AAWS total scores were, on average, almost 5 points higher in DHM simulations than in real-life tasks, because both observers assigned higher scores for static postures and for extra strains in DHM simulations than in real life. Similarly, the average difference in paired AAWS scores between real-life tasks and DHM simulations was quite high for those two measures. Moreover, scores on action forces also showed a large difference in paired AAWS scores and a relatively low correlation between real-life tasks and DHM simulations. How can these differences and the moderate correlations be explained?
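Before turning to the explanation, note that a category-agreement figure of this kind is simply the share of tasks assigned the same color in both settings. A minimal sketch with invented category labels (not the study's data):

```python
# Hypothetical risk categories assigned to the same ten tasks
# in real life and in the DHM simulation (invented example data):
real_life  = ["green", "yellow", "red", "green", "yellow",
              "red", "green", "yellow", "red", "green"]
simulation = ["green", "yellow", "yellow", "yellow", "yellow",
              "red", "green", "red", "red", "yellow"]

# Percent agreement: fraction of tasks falling into the same category.
agreement = sum(a == b for a, b in zip(real_life, simulation)) / len(real_life)
print(f"{agreement:.0%} of tasks fell into the same risk category")
```

In this made-up example the agreement works out to 60%, mirroring the level reported above; disagreements arise whenever a task crosses a category boundary even though its total score shifts only slightly.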
Although the ergonomists’ agreement on AAWS scores in real-life tasks was good, thereby confirming that AAWS is an objective tool, inter-rater reliability on AAWS total scores was lower for the DHM simulations, which is in line with the results of Lämkull and colleagues (2007). This was predominantly caused by the poor agreement in scores on action forces, which showed clearly the lowest inter-rater reliability for DHM simulations and was the only measure on which the ergonomists’ ratings were not significantly related.
These inter-rater differences probably occurred because such kinds of workload (e.g., finger forces for clipping) could not be directly observed in the DHM simulations. Hence, the ergonomists’ assessments were based rather on their experience, which both experts confirmed in a follow-up interview. This also explains why scores on action forces show the lowest correlation