SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Stuart EA, Crifasi C, McCourt A, Vernick JS, Webster D. Am. J. Public Health 2017; 107(8): e26.

Affiliation

Johns Hopkins Bloomberg School of Public Health, Baltimore, MD.

Copyright

(Copyright © 2017, American Public Health Association)

DOI

10.2105/AJPH.2017.303890

PMID

28700289

Abstract

Deriving valid estimates of the effects of state or federal policies is challenging but crucial for protecting the public’s health. Unfortunately, we are noticing a trend in which authors are attempting to estimate associations between policies and gun violence using inappropriate methodologies. In their March 2017 AJPH article, Anestis et al. used analysis of covariance (essentially, regression) to examine the associations of four different firearm laws with changes in states’ suicide rates from 2013 to 2014. The presence of laws requiring waiting periods and background checks for virtually all handgun purchases was linked to lower suicide rates in 2014 than in 2013. Changes in suicide rates were not related to states’ restrictions on open carrying of firearms or safe gun storage requirements.

A limitation of the Anestis et al. study is the lack of variation across time in the state laws of interest; the authors noted that no states changed these laws during the study period. (Colorado and Delaware did extend background check requirements to sales by private gun owners in 2013.) The authors made putative causal inferences about the laws but did not explain how a static variable would produce temporal changes in suicide rates, as any protective effects should have been operating in both 2013 and 2014. The study design also made it impossible to control for prelaw characteristics of states (e.g., differences among states that do and do not implement the laws of interest). Although Anestis et al. controlled for a limited set of factors, those factors were, in fact, “posttreatment” variables, measured after the states already had the laws of interest in place. Controlling for posttreatment variables, such as suicide rates in 2013, could have biased estimates of policy effects.
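The posttreatment-bias point can be illustrated with a small simulation. This is not the authors' analysis and all numbers are invented: a static binary "law" and an unobserved state characteristic both affect suicide rates in 2013 and 2014, and adjusting for the 2013 rate (a posttreatment variable) attenuates the estimated law effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # simulated "states" (large n so estimates are stable)

# Hypothetical data-generating process: the law is in place in BOTH years,
# and an unobserved state-level factor u also shifts rates in both years.
law = rng.integers(0, 2, n)            # 1 = state has the law
u = rng.normal(size=n)                 # unobserved confounder
y2013 = -2.0 * law + u + rng.normal(size=n)
y2014 = -2.0 * law + u + rng.normal(size=n)  # true law effect is -2.0

def ols(predictors, y):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols([law], y2014)              # y2014 ~ law
adjusted = ols([law, y2013], y2014)    # y2014 ~ law + y2013 (posttreatment control)

print(f"law effect, unadjusted:            {naive[1]:.2f}")     # ~ -2.0
print(f"law effect, adjusting for y2013:   {adjusted[1]:.2f}")  # ~ -1.0, biased toward 0
```

Because the 2013 rate is itself affected by the law, conditioning on it opens a path through the unobserved factor and absorbs part of the law's effect; in this simulation the adjusted estimate lands roughly halfway between the true effect and zero.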

As noted, the authors’ study design was also vulnerable to omitted variable bias because it did not attend to similarities or differences in states that did or did not have the policies of interest in place. A stronger causal design would have ensured that the states that implemented the law were similar to those that did not on key outcome predictors, including the outcome in the baseline (prelaw) time period. Methods such as synthetic control approaches, which are designed to ensure that the intervention and comparison groups (e.g., states) are similar during the baseline prelaw period, are becoming more common and can provide much more valid estimates of the effects of policy changes.
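A minimal sketch of the synthetic control idea described above, using simulated data (all states, years, and effect sizes are invented for illustration): nonnegative weights summing to 1 are chosen so that a weighted combination of comparison "donor" states reproduces the treated state's prelaw trajectory, and the postlaw gap between the treated state and its synthetic counterpart estimates the policy effect.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Hypothetical data: yearly rates for 1 treated state and 20 donor states
# over 10 prelaw years and 5 postlaw years, around a shared upward trend.
n_donors, pre, post = 20, 10, 5
trend = np.linspace(10, 12, pre + post)
donors = trend + rng.normal(0, 0.5, (n_donors, pre + post))
treated = trend + rng.normal(0, 0.2, pre + post)
treated[pre:] -= 1.0                   # assumed true law effect: -1.0

# Find convex donor weights that best match the treated state's PRElaw path.
def pre_fit(w):
    return np.sum((treated[:pre] - w @ donors[:, :pre]) ** 2)

w0 = np.full(n_donors, 1 / n_donors)
res = minimize(pre_fit, w0, method="SLSQP",
               bounds=[(0, 1)] * n_donors,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
w = res.x

synthetic = w @ donors                 # counterfactual trajectory
effect = (treated[pre:] - synthetic[pre:]).mean()
print(f"estimated postlaw effect: {effect:.2f}")  # close to the assumed -1.0
```

The baseline-matching constraint is what distinguishes this design from the regression approach criticized above: the comparison group is constructed to resemble the treated state before the law, so prelaw differences among states cannot masquerade as policy effects.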


Language: en
