SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Senior M, Fanshawe T, Fazel M, Fazel S. JCPP Adv. 2021; 1(3): e12034.

Copyright

(Copyright © 2021, Association for Child and Adolescent Mental Health)

DOI

10.1002/jcv2.12034

PMID

unavailable

Abstract

BACKGROUND There has been a rapid growth in the publication of new prediction models relevant to child and adolescent mental health. However, before their implementation in clinical services, it is necessary to appraise the quality of their methods and reporting. We conducted a systematic review of new prediction models in child and adolescent mental health, and examined their development and validation.

METHOD We searched five databases for studies developing or validating multivariable prediction models for individuals aged 18 years or younger, published from 1 January 2018 to 18 February 2021. Quality of reporting was assessed using the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist, and quality of methodology using items based on expert guidance and the PROBAST tool.

RESULTS We identified 100 eligible studies: 41 developing a new prediction model, 48 validating an existing model and 11 that included both development and validation. Most publications (k = 75) reported a model discrimination measure, while 26 investigations reported calibration. Of 52 new prediction models, six (12%) were for suicidal outcomes, 18 (35%) for future diagnosis, and five (10%) for child maltreatment. Other outcomes included violence, crime, and functional outcomes. Eleven new models (21%) were developed for use in high-risk populations. Of development studies, around a third were sufficiently statistically powered (k = 16, 31%), while this proportion was lower for validation investigations (k = 12, 25%). In terms of performance, the discrimination (as measured by the C-statistic) for new models ranged from 0.57 for a tool predicting ADHD diagnosis in an external validation sample to 0.99 for a machine learning model predicting foster care permanency.
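
The abstract itself contains no code; the sketch below is purely illustrative and not taken from the study. It shows one common way the two performance measures mentioned above are computed for a binary risk prediction model: discrimination via the C-statistic (area under the ROC curve) and calibration via the calibration intercept and slope. It assumes Python with NumPy, statsmodels and scikit-learn, and the outcomes and predicted probabilities are simulated placeholders rather than data from any reviewed model.

    # Illustrative only: assessing discrimination and calibration of a binary
    # risk prediction model on a validation sample (simulated placeholder data).
    import numpy as np
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical validation data: model-predicted risks and observed outcomes.
    p_hat = rng.uniform(0.01, 0.99, size=500)   # predicted probabilities
    y_true = rng.binomial(1, p_hat)             # simulated observed binary outcomes

    # Discrimination: the C-statistic equals the area under the ROC curve.
    c_statistic = roc_auc_score(y_true, p_hat)

    # Calibration: regress observed outcomes on the log-odds of the predictions.
    # An intercept near 0 and a slope near 1 indicate good calibration.
    logit_p = np.log(p_hat / (1 - p_hat))
    calib_fit = sm.Logit(y_true, sm.add_constant(logit_p)).fit(disp=0)
    intercept, slope = calib_fit.params

    print(f"C-statistic: {c_statistic:.2f}")
    print(f"Calibration intercept: {intercept:.2f}, slope: {slope:.2f}")

Reporting both measures matters because a model can rank individuals well (high C-statistic) while still systematically over- or under-estimating absolute risk, which is what the calibration check detects.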

CONCLUSIONS Although some tools have recently been developed in child and adolescent mental health for prognosis and child maltreatment, none can currently be recommended for clinical practice due to a combination of methodological limitations and poor model performance. New work needs to ensure sufficient sample sizes, representative samples, and testing of model calibration.


Language: en

Keywords

child protection; justice; multivariable models; risk assessment; risk prediction; self-harm
