Decision makers need timely and credible information about the effectiveness of behavioral health interventions. Online evidence-based program registers (EBPRs) have been developed to address this need. However, the methods by which these registers designate programs and practices as "evidence-based" have not been investigated in detail. This paper examines the evidentiary criteria EBPRs use to rate programs and the implications for how different registers rate the same programs. Although the registers tend to employ a standard Campbellian hierarchy of evidence to assess evaluation results, they disagree considerably about what constitutes an adequate research design and sufficient data for designating a program as evidence-based. Registers also differ in how they report findings of "no effect," which may deprive users of important information. Of all programs listed on the 15 registers that rate individual programs, 79% appear on only one register. Among a random sample of 100 programs rated by more than one register, 42% were rated inconsistently across registers to some degree.
Keywords
Evidence-based programs; Evidence-based practices; Best practices; Evidence-based program registers; Standards of evidence; Hierarchies of evidence; Treatment effectiveness; Behavioral health