SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Dorfman AH, Valliant R. A re-analysis of repeatability and reproducibility in the Ames-USDOE-FBI study. Stat. Public Policy (Phila.) 2022; 9(1): 175-184.

Copyright

(Copyright © 2022, Informa - Taylor and Francis Group)

DOI

10.1080/2330443X.2022.2120137

PMID

unavailable

Abstract

Forensic firearms identification, the determination by a trained firearms examiner as to whether bullets or cartridges came from a common weapon, has long been a mainstay in the criminal courts. The reliability of forensic firearms identification has been challenged in the general scientific community, and, in response, several studies have been carried out aimed at showing that firearms examination is accurate, that is, has low error rates. Less studied has been the question of consistency: whether two examinations of the same bullets or cartridge cases reach the same conclusion, whether conducted by one examiner on separate occasions (intrarater reliability, or repeatability) or by two different examiners (interrater reliability, or reproducibility).

One important study, described in a 2020 Report by the Ames Laboratory-USDOE to the Federal Bureau of Investigation, went beyond considerations of accuracy to investigate the repeatability and reproducibility of firearms examination. The Report's conclusions were paradoxical. The observed agreement of examiners with themselves or with other examiners appears mediocre. The study nevertheless concluded that repeatability and reproducibility are satisfactory, on the grounds that the observed agreement exceeds a quantity called the expected agreement. We find that employing expected agreement as it was intended does not suggest satisfactory repeatability and reproducibility, but the opposite.
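
For context, the Kappa index listed among the keywords ties the two agreement measures at issue together: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is expected (chance-level) agreement. Observed agreement merely exceeding expected agreement therefore guarantees only that kappa is positive, not that it is anywhere near 1. A minimal Python sketch of the arithmetic, using hypothetical agreement rates rather than figures from the study:

```python
# Illustrative sketch: Cohen's kappa rescales observed agreement (p_o)
# against the agreement expected by chance alone (p_e).
def cohens_kappa(p_o: float, p_e: float) -> float:
    """kappa = (p_o - p_e) / (1 - p_e); 1.0 is perfect, 0.0 is chance-level."""
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical rates, not taken from the Ames study: examiners agree on
# 75% of comparisons, while chance alone would produce 60% agreement.
print(f"{cohens_kappa(0.75, 0.60):.3f}")  # 0.375: above chance, but far from perfect
```

On this reading, observed agreement exceeding expected agreement is a floor rather than evidence of satisfactory consistency, which appears to be the distinction at issue in the abstract.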

Keywords

Expected agreement; Firearms; Forensic science; Kappa index; Observed agreement
