Journal Article

Citation: Gorman DM, Huber JC. Eval. Rev. 2009; 33(4): 396-414.

Copyright: © 2009, SAGE Publishing

DOI: 10.1177/0193841X09334711

PMID: unavailable

Abstract

As in many areas of public policy, the idea of evidence-based practice has been enthusiastically embraced by the field of drug prevention during the last decade. A key component of this move toward evidence-based practice has been the production of 'best practice' lists of approved drug prevention programs. Those who develop these lists claim that a body of scientific evaluation research demonstrates the efficacy of the programs they recommend. Prevention practitioners, they argue, should adopt these programs and abandon those interventions for which no such scientific evidence exists. The Drug Abuse Resistance Education (DARE) program is probably the best-known drug prevention intervention said to be unsupported by empirical evidence.

Against the almost overwhelming support for the development of best practice lists in the drug prevention field, a small critical literature has emerged, focused both on the methods and criteria used to select interventions (e.g., Gorman 2002; Petrosino 2003) and on the quality of the evaluation research associated with the programs that appear most often on the lists. With regard to the latter, it has been argued that evaluations of many of the best-known and most widely advocated drug prevention programs use data analysis and presentation practices that serve to verify the hypothesis that the program 'works' rather than to critically test it. These practices include multiple subgroup analyses, post hoc sample refinement, use of one-tailed significance tests, and changes in the way outcome variables are constructed across publications from the same evaluation. The use of such practices raises the concern that it is the data analysis techniques used to generate 'evidence,' rather than the interventions themselves, that distinguish 'evidence-based' programs from their 'unproven' counterparts.
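
To make the concern concrete, the following is a minimal simulation sketch, not drawn from any of the evaluations discussed here; the group sizes, number of subgroups, and alpha level are illustrative assumptions. Even with no true program effect, running a one-tailed test in every subgroup makes it likely that at least one 'significant' difference will turn up:

    # Illustrative sketch: how multiple subgroup analyses combined with
    # one-tailed tests inflate the chance of a spurious "significant" effect.
    # All parameters (group size, number of subgroups, alpha) are assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_per_arm = 200        # participants per condition within each subgroup
    n_subgroups = 12       # e.g., sex x grade x baseline-use strata
    alpha = 0.05
    n_simulations = 5_000

    runs_with_a_hit = 0
    for _ in range(n_simulations):
        hit = False
        for _ in range(n_subgroups):
            program = rng.normal(0.0, 1.0, n_per_arm)   # no true effect
            control = rng.normal(0.0, 1.0, n_per_arm)
            # One-tailed test in the hypothesized direction (program < control)
            t, p_two = stats.ttest_ind(program, control)
            p_one = p_two / 2 if t < 0 else 1 - p_two / 2
            if p_one < alpha:
                hit = True
        runs_with_a_hit += hit

    print(f"Share of null evaluations with >= 1 'significant' subgroup effect: "
          f"{runs_with_a_hit / n_simulations:.2f}")

With 12 independent subgroup tests at alpha = .05, roughly 1 - 0.95^12, or about 46%, of evaluations of a wholly ineffective program would yield at least one reportable effect.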

The potential for such a problem to occur in the identification of best practices would be limited if stringent criteria were used to evaluate a program's effects on behavioral outcomes. For the most part, the criteria used to establish the evidence base of programs selected for best practice lists have focused on three methodological issues: the quality of the study design used to evaluate the program (e.g., whether random allocation to study conditions was used), the types of measures used to assess outcomes (e.g., whether they are reliable and valid), and the extent to which the study was successfully implemented (e.g., final sample size and attrition). Absent from these criteria is any assessment of data analysis practices. Moreover, those criteria that pertain to program effects on behavioral outcomes are easily met because they generally require demonstration of just a single statistically significant effect.

In the most detailed analysis of this issue conducted to date, Gandhi et al. (Gandhi, A. G., E. Murphy-Graham, A. Petrosino, S. S. Chrismer, and C. H. Weiss. 2007. The devil is in the details: Examining the evidence for 'proven' school-based drug abuse prevention programs. Evaluation Review 31:43-74.) reviewed the evidence pertaining to the five programs most often recommended across seven widely used best practice lists. They found that for four of the programs there was very limited evidence of effectiveness in reducing substance use, yet each program met the inclusion criteria of the lists on which it appeared because those criteria allowed for isolated statistically significant effects from just one or two studies.

This study explores the possibility that virtually any drug prevention program might be considered 'evidence-based' if data analysis procedures that optimize the chance of producing statistically significant results are employed. To do so, we reanalyze data from an evaluation of the DARE program. The analysis produced a number of statistically significant differences between the DARE and control conditions on measures of alcohol and marijuana use. Many of these differences occurred at cutoff points on the assessment scales for which meaningful labels were created post hoc. We compare our results with those from evaluations of programs that appear on evidence-based drug prevention lists.
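
As an illustration of the cutoff-scanning issue, the sketch below uses entirely hypothetical data (not the DARE evaluation data): it dichotomizes an ordinal use scale at every possible threshold and flags any split that crosses p < .05, even though both groups are drawn from the same distribution.

    # Illustrative sketch: scanning all cutoff points of an ordinal use scale
    # for a "significant" group difference. The scale, sample size, and
    # response probabilities are assumptions; there is no true group difference.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(1)
    n = 400
    levels = np.arange(7)                          # 0 = never ... 6 = daily
    probs = [0.55, 0.15, 0.10, 0.08, 0.06, 0.04, 0.02]

    program = rng.choice(levels, size=n, p=probs)   # identical distributions,
    control = rng.choice(levels, size=n, p=probs)   # i.e., no real effect

    for cutoff in levels[1:]:
        table = [
            [(program >= cutoff).sum(), (program < cutoff).sum()],
            [(control >= cutoff).sum(), (control < cutoff).sum()],
        ]
        _, p, _, _ = chi2_contingency(table)
        flag = "  <-- could be reported as a program effect" if p < 0.05 else ""
        print(f"use >= {cutoff}: p = {p:.3f}{flag}")

Because each cutoff provides another chance at a small p value, a 'significant' split can often be found by chance and then given a substantively meaningful label after the fact.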
