Journal Article

Citation

Orsi R, Drury IJ, Mackert MJ. Child. Youth Serv. Rev. 2014; 43: 58-66.

Copyright

(Copyright © 2014, Elsevier Publishing)

DOI

10.1016/j.childyouth.2014.04.016

PMID

unavailable

Abstract

Child protective services caseworkers need validated instruments to assist them in assessing safety and risk factors for child maltreatment. The literature provides growing evidence that actuarial risk assessments can be valid tools for classifying families according to their risk of future maltreatment. However, less is known about the reliability of these assessments, both as whole instruments and at the level of individual items, than about their validity. In this study, we tested the interrater reliability of 108 individual risk and safety items using 31 realistic case vignettes; each item was rated six times for each vignette. Fifty-four caseworkers and supervisors participated in the rating, generating a total of 20,088 ratings (108 items × 31 vignettes × 6 ratings) for analysis. To determine item reliability, we used prevalence, percentage agreement, and Fleiss's kappa.

Results show that interrater reliability varies widely from item to item. Items with higher prevalence and items documenting demographics, current CPS system involvement, substance abuse, or mental health issues tend to be the most reliable. We provide an overview of the testing process, which is replicable in other contexts, and discuss implications for child protective services practice and for developing or revising risk and safety assessment instruments.
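
For readers who want to see how the item-level reliability measures named in the abstract are computed, the sketch below shows one way to calculate percentage agreement and Fleiss's kappa for a single yes/no item rated six times on each of 31 vignettes. This is not the authors' code: the Python/NumPy implementation, the toy rating table, and the function names are illustrative assumptions based only on the measures the abstract mentions.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_subjects, n_categories) array where counts[i, j] is the
    number of raters assigning subject i to category j (equal row sums)."""
    counts = np.asarray(counts, dtype=float)
    n_subjects = counts.shape[0]
    n_raters = counts.sum(axis=1)[0]                     # raters per subject
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)   # category proportions
    # Per-subject observed agreement, then chance-corrected overall agreement.
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1.0 - P_e)

def percent_agreement(counts):
    """Mean pairwise rater agreement across subjects (same table layout)."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    pairs = (counts * (counts - 1)).sum(axis=1) / (n_raters * (n_raters - 1))
    return pairs.mean()

# Toy data (hypothetical): one binary risk item, 31 vignettes, 6 ratings each.
rng = np.random.default_rng(seed=0)
yes_counts = rng.integers(0, 7, size=31)        # number of "yes" ratings, 0..6
table = np.column_stack([yes_counts, 6 - yes_counts])

print(f"Fleiss's kappa:    {fleiss_kappa(table):.3f}")
print(f"Percent agreement: {percent_agreement(table):.3f}")
```

Kappa corrects raw agreement for the agreement expected from category prevalence alone, which is why prevalence and percentage agreement are typically reported alongside it: items that raters almost always answer the same way can show high raw agreement yet low kappa.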
