SAFETYLIT WEEKLY UPDATE

Journal Article

Citation

Winship C, Zhuo X. J. Quant. Criminol. 2020; 36(2): 329-346.

Copyright

(Copyright © 2020, Holtzbrinck Springer Nature Publishing Group)

DOI

10.1007/s10940-018-9387-8

PMID

unavailable

Abstract

A key issue is how to interpret t-statistics when publication bias is present. In this paper we propose a set of rough rules of thumb to help readers interpret t-values in published results under publication bias. Unlike most previous methods, which pool collections of studies, our approach evaluates the strength of evidence under publication bias when only a single study is available.

Methods

We first re-interpret t-statistics from a one-tailed hypothesis test in terms of their associated p-values under extreme publication bias, that is, when no null findings are published. We then consider the consequences of different degrees of publication bias. We show that even under moderate levels of publication bias, adjusting one's p-values to ensure Type I error rates of either 0.05 or 0.01 results in far higher critical t-values than those in a conventional t-statistics table. Under a conservative assumption that publication bias occurs 20 percent of the time, a one-tailed test at a significance level of 0.05 requires a t-value equal to or greater than 2.311; for a two-tailed test the appropriate standard is 2.766 or above. Both cutoffs are far higher than the traditional ones of 1.645 and 1.96. To achieve a p-value below 0.01, the adjusted t-values would be 2.865 (one-tailed) and 3.254 (two-tailed), as opposed to the traditional values of 2.326 (one-tailed) and 2.576 (two-tailed). We illustrate our approach by applying it to the hypothesis tests reported in recent issues of Criminology and the Journal of Quantitative Criminology (JQC).
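The extreme-bias case above (no null findings published) can be sketched numerically: if only results with nominal p below alpha appear in print, then under the null hypothesis the published p-values are uniform on (0, alpha), so the effective p-value is the nominal one rescaled by 1/alpha. The following Python sketch illustrates that rescaling; the function names and the large-sample normal approximation are our own illustrative assumptions, not the paper's code or exact derivation:

```python
from math import erfc, sqrt

def one_tailed_p(t):
    # One-tailed p-value for a t-statistic, using the large-sample
    # normal approximation P(Z >= t) = erfc(t / sqrt(2)) / 2.
    return 0.5 * erfc(t / sqrt(2.0))

def p_under_extreme_bias(t, alpha=0.05):
    # Under extreme publication bias only results with nominal p < alpha
    # are published; conditioning on publication rescales the p-value by
    # 1/alpha (capped at 1). Illustrative assumption, not the paper's
    # exact adjustment.
    return min(one_tailed_p(t) / alpha, 1.0)

# A t-value of 1.645 is nominally "significant" at 0.05 one-tailed, but
# carries essentially no evidence once extreme bias is assumed.
print(round(one_tailed_p(1.645), 3))         # nominal p, roughly 0.05
print(round(p_under_extreme_bias(1.645), 2))  # effective p, roughly 1
print(p_under_extreme_bias(3.0) < 0.05)       # a larger t stays informative
```

This makes the paper's point concrete: conditioning on publication drains almost all evidential value from t-statistics that sit just at the conventional cutoff.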

Conclusion

Under publication bias, much higher t-values are needed to restore the intended p-value. By comparing observed test statistics with the adjusted critical values, this paper provides a rough rule of thumb for readers to evaluate the degree to which a reported positive result in a single publication reflects a true positive effect. Further measures to increase the reporting of robust null findings are needed to ameliorate publication bias.
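The rule of thumb described in the conclusion, comparing an observed test statistic against the adjusted critical values, amounts to a small table lookup. A minimal Python sketch, hard-coding only the cutoffs quoted in the abstract (the function name `evaluate_t` and the assumption of a positive-direction t-value are ours):

```python
# Critical t-values quoted in the abstract: (conventional, adjusted),
# where "adjusted" reflects the conservative 20-percent publication-bias
# assumption.
CUTOFFS = {
    ("one-tailed", 0.05): (1.645, 2.311),
    ("two-tailed", 0.05): (1.960, 2.766),
    ("one-tailed", 0.01): (2.326, 2.865),
    ("two-tailed", 0.01): (2.576, 3.254),
}

def evaluate_t(t, tails="two-tailed", alpha=0.05):
    # Classify a reported (positive-direction) t-value against both the
    # conventional and the publication-bias-adjusted critical values.
    conventional, adjusted = CUTOFFS[(tails, alpha)]
    if t >= adjusted:
        return "significant even after the publication-bias adjustment"
    if t >= conventional:
        return "conventionally significant, but below the adjusted cutoff"
    return "not significant at the conventional level"

print(evaluate_t(2.10))  # above 1.96 but below 2.766
print(evaluate_t(3.00))  # clears the adjusted two-tailed cutoff
```

The middle category is the paper's cautionary zone: results that pass a conventional t-table but fall short of the bias-adjusted standard.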


Language: en
