Lange and Eggert (2014) also criticised Hagger and
Chatzisarantis’s (2013) findings on the basis of an ‘incredibility index’
analysis, which contrasts the number of statistically significant findings
in the reported studies against the total power of those studies. Their
analysis indicated that the probability of not obtaining the pattern of
results reported by Hagger and Chatzisarantis (2013) was 98%.
However, Lange and Eggert (2014) used a weighted average effect
size (meta-analytic effect size; Hagger et al., 2010) to calculate the
incredibility index. They omitted to report incredibility indexes that
were calculated on the basis of observed or averaged effect sizes
(see Schimmack, 2012). We re-ran the incredibility index analysis
using the observed and averaged effect sizes from the individual
studies in Hagger and Chatzisarantis’s (2013) article and found the
incredibility index to be as low as 78% (see Table 1). The reason for
this difference is that the average power of the studies calculated on the
basis of the individual effect sizes, or of their unweighted average,
is larger than the average power calculated on the basis of
the weighted average effect size. These incredibility indexes are lower
than those reported by Lange and Eggert (2014) and suggest that
their dismissal of glucose effects on self-control performance is an
overstatement.
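To make concrete how the choice of effect size drives the index, the following minimal sketch computes an incredibility index in the spirit of Schimmack (2012): post-hoc power is estimated for each study, and the index is the binomial probability of obtaining fewer significant results than were reported. The effect sizes, group sizes, and alpha level are hypothetical placeholders, not the values from Hagger and Chatzisarantis (2013) or from Table 1.

```python
# Illustrative incredibility-index sketch (Schimmack, 2012).
# All effect sizes, sample sizes, and alpha are hypothetical placeholders.
from scipy.stats import binom
from statsmodels.stats.power import TTestIndPower

alpha = 0.05
power_calc = TTestIndPower()

# (effect size d, participants per group) for each reported study -- placeholders
studies = [(0.70, 20), (0.55, 25), (0.80, 18), (0.60, 22)]
n_significant = 4  # number of studies reporting a significant effect

def incredibility_index(effects_and_ns, k_significant):
    """Probability of obtaining fewer significant results than reported,
    given the average post-hoc power of the studies."""
    powers = [power_calc.power(effect_size=d, nobs1=n, alpha=alpha)
              for d, n in effects_and_ns]
    mean_power = sum(powers) / len(powers)
    # P(X < k) under a binomial model with success probability = mean power
    return binom.cdf(k_significant - 1, len(powers), mean_power)

# Index based on each study's own (observed) effect size
print(f"IC (observed effect sizes): {incredibility_index(studies, n_significant):.2%}")

# Index based on a single pooled estimate applied to every study,
# analogous to using a (weighted) meta-analytic effect size
pooled_d = 0.45  # hypothetical smaller pooled estimate
pooled_studies = [(pooled_d, n) for _, n in studies]
print(f"IC (pooled effect size):    {incredibility_index(pooled_studies, n_significant):.2%}")
```

Because the pooled estimate is typically smaller than the study-level effect sizes, it yields lower average power and therefore a higher incredibility index, which is the pattern at issue in the comparison above.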
In addition, a more relevant analysis would evaluate whether the effects
were due to a ‘small-study’ bias, which reflects the tendency for smaller
studies to report larger effect sizes.
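For illustration, one common way to probe such a small-study bias is an Egger-type weighted regression of effect sizes on their standard errors; the sketch below shows the general form of that analysis. The effect sizes and standard errors are hypothetical placeholders, not the studies from Hagger and Chatzisarantis (2013).

```python
# Illustrative small-study (funnel-plot asymmetry) check via an Egger-type
# regression of effect sizes on their standard errors.
# The data below are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes (d) and standard errors
d = np.array([0.70, 0.55, 0.80, 0.60, 0.30])
se = np.array([0.32, 0.28, 0.35, 0.30, 0.15])

# Weighted regression of effect size on standard error (inverse-variance weights).
# A slope reliably greater than zero indicates that smaller (less precise)
# studies report larger effects, i.e. possible small-study bias.
X = sm.add_constant(se)
model = sm.WLS(d, X, weights=1.0 / se**2).fit()
print(f"SE slope = {model.params[1]:.3f}, p = {model.pvalues[1]:.3f}")
```

A non-significant slope in such a regression would weaken the claim that the reported glucose effects merely reflect the over-representation of large effects in small studies, although with few studies the test has limited power.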