Article Coding
Articles were coded on 10 dimensions of interest (country, sample, target audience, formative research, use of theory, campaign channels/components, campaign slogan, message exposure, evaluation design, and outcome measures) by two independent coders. After each article was coded, the coders and the first author met to compare the coders' work and discuss any discrepancies. Intercoder reliability was calculated for each coded characteristic. Percent agreement was calculated for each coding category by dividing the number of agreed-upon coding decisions by the total number of decisions; for example, in the target audience category, the coders agreed on 35 of the 37 articles, or 95% agreement. Cohen's (1960) kappa, which corrects for chance agreement, also was calculated. Percent agreement ranged from a low of 89% to a high of 100%, with a mean of 95%; Cohen's kappa ranged from a low of .78 to a high of 1.0, with a mean of .90. These figures indicated very good agreement between the coders. All discrepancies were resolved through discussion between the two coders and the first author.
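For reference, both reliability indices follow their standard definitions; the sketch below uses the target audience figures reported above as the worked percent-agreement example, whereas the chance-agreement term for kappa depends on each coder's marginal category frequencies, which are not reported here.

\[
\text{percent agreement} \;=\; \frac{\text{agreed-upon coding decisions}}{\text{total coding decisions}}, \qquad \text{e.g., } \frac{35}{37} \approx .95
\]

\[
\kappa \;=\; \frac{p_o - p_e}{1 - p_e}
\]

where \(p_o\) is the observed proportion of agreement and \(p_e\) is the proportion of agreement expected by chance given each coder's marginal category frequencies (Cohen, 1960).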