A Priori Power Analysis
Our power calculations were based on similar studies reported in the literature [60–63] and on results from a previous pilot study [29]. Across these sources, effect sizes ranged from small to large (Cohen's d of .28–1.03), and the small effect sizes were associated with non-central outcome measures (e.g., VO2 max, which had a median effect size of .39). For variables similar to our primary outcomes (e.g., physical activity, activity adherence), the median effect size was relatively large (d = .69). Given several design features of our study, such as regular contacts, the absence of a wait-list control that would delay intervention participation, monetary incentives, and improvements in the validity of the measures, we anticipate that the effect sizes obtained for the intervention will be at least as large as these rough estimates.

We estimated that a sample of 180 participants would be adequate to maintain 80% power
for addressing our study aims. For mixed modeling, power of 80% or greater was expected with effects greater than .44 (a reasonable, clinically meaningful effect), a conservatively high assumed correlation of .70 among repeated measures (higher correlations generally yield lower power when other conditions are equal), and attrition rates as high as 37% (i.e., complete data on at least n = 113) (see [64]). With an attrition rate of 13% (i.e., complete data on n = 156), which is roughly the typical attrition rate reported in the literature for studies that include regular follow-up contact, the minimum detectable effect size was .37. These estimated effect sizes are smaller than the effect sizes we anticipate for the intervention.
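The exact mixed-model specification behind these figures is not reported here, but the qualitative logic (more attrition or a higher correlation among repeated measures implies a larger minimum detectable effect) can be sketched as below. This is a rough illustration only, not the procedure from [64]: it assumes a two-arm comparison with equal allocation, a two-sided alpha of .05, a hypothetical m = 3 measurement occasions, and a compound-symmetry design-effect adjustment to the effective sample size.

```python
# Rough sketch of the minimum-detectable-effect-size (MDES) logic described above.
# Assumptions (not taken from the study protocol): two arms with equal allocation,
# two-sided alpha = .05, m = 3 repeated measures, compound symmetry, and the
# design-effect adjustment n_eff = n_complete / (1 + (m - 1) * rho).
from statsmodels.stats.power import TTestIndPower

def mdes(n_complete, rho=0.70, m=3, power=0.80, alpha=0.05):
    """Minimum detectable standardized effect (Cohen's d) for n_complete
    participants split evenly across two arms, with m repeated measures
    correlated at rho."""
    n_eff = n_complete / (1 + (m - 1) * rho)   # effective sample size
    return TTestIndPower().solve_power(
        effect_size=None, nobs1=n_eff / 2, alpha=alpha,
        power=power, ratio=1.0, alternative="two-sided")

# 37% attrition from 180 leaves ~113 with complete data; 13% leaves ~156.
for n in (113, 156):
    print(n, round(mdes(n), 2))
```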
For structural equation modeling (SEM), our planned models were expected to provide greater than 80% power for rejecting the null hypothesis of acceptable fit (RMSEA = .10) in favor of close fit (RMSEA = .05) with complete data on as few as 150 participants, i.e., even with attrition exceeding the typical 13% (see [65]).
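The RMSEA-based power computation referenced above follows the general approach of comparing noncentral chi-square distributions under the null and alternative RMSEA values. A minimal sketch is given below; the degrees of freedom of the planned models are not stated in this section, so df = 60 is a purely hypothetical placeholder, and alpha = .05 is assumed.

```python
# Hypothetical sketch of RMSEA-based SEM power: power to reject H0 (acceptable
# fit, RMSEA = .10) in favor of H1 (close fit, RMSEA = .05). Noncentrality is
# taken as (n - 1) * df * RMSEA**2; the model df below is a placeholder.
from scipy.stats import ncx2

def power_not_close_fit(n, df, rmsea_null=0.10, rmsea_alt=0.05, alpha=0.05):
    """Power to reject H0: RMSEA = rmsea_null in favor of H1: RMSEA = rmsea_alt."""
    nc0 = (n - 1) * df * rmsea_null ** 2   # noncentrality under H0
    nc1 = (n - 1) * df * rmsea_alt ** 2    # noncentrality under H1
    crit = ncx2.ppf(alpha, df, nc0)        # reject H0 when chi-square is small
    return ncx2.cdf(crit, df, nc1)

# e.g., 150 complete cases and a hypothetical model with 60 degrees of freedom:
print(round(power_not_close_fit(150, 60), 2))
```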
