
Journal Article

Citation

MacKenzie EJ, Garthe EA, Gibson G. Proc. Am. Assoc. Automot. Med. Annu. Conf. 1978; 22(1): 55-66.

Copyright

(Copyright © 1978, Association for the Advancement of Automotive Medicine)

DOI

unavailable

PMID

unavailable

Abstract

Because of the wide usage and potential value of the AIS and ISS in rating injury severity in both vehicular and non-vehicular trauma patients, it is essential that three hitherto unanswered but crucial methodological questions concerning the index be resolved:

(1) Is coding AIS from the emergency department encounter sheet as accurate as coding from the more detailed inpatient charts?
(2) What type of person is ideally qualified to use the index? How much clinical background should the coder have for optimal use of the AIS?
(3) Can the Scale be used with high inter- and intra-rater reliability when coding injuries resulting from both vehicular and non-vehicular trauma? What are the implications of coding differences when computing Baker's overall ISS score?

In the present study, inpatient charts for 98 trauma admissions to Johns Hopkins Hospital during the period November 1976 to May 1977 were obtained from medical records. Fifty of these patients had been involved in motor vehicle accidents; the remaining 48 were victims of non-vehicular trauma. Three coders participated: Coder 1 was a research worker with experience in medical abstracting and a sound knowledge of medical terminology but no clinical experience; the other two were nurses who worked in the Johns Hopkins Adult Emergency Department.

To examine the comparability of AIS coding from the emergency department encounter sheet with coding from the inpatient record, Coder 1 was asked to record and rate the severity of all injuries noted on the emergency department record for the sub-sample of fifty trauma patients; one month later she reviewed and rated the injuries noted on the corresponding inpatient charts. To measure inter-rater reliability, all three coders rated injuries from the same 98 inpatient records, and differences in coding between the research worker and the nurses with clinical backgrounds were examined. Intra-rater reliability was examined by having the coders review and score a subsample of the charts four months after the initial chart review. The kappa statistic, first developed by Cohen (1960, 1968) and later generalized by Fleiss (1971) to measure agreement among more than two raters, was used to measure the agreement of severity scores obtained from different sources and from different coders.
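For readers unfamiliar with the two measures named above, the following minimal sketch illustrates how Baker's ISS is computed from AIS values and how a two-rater kappa quantifies coder agreement. The data, function names, and body regions are entirely hypothetical and do not come from the paper; the sketch also shows only Cohen's original two-rater form of kappa, whereas the study used Fleiss's (1971) generalization to handle all three coders.

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    n = len(ratings_a)
    # Observed proportion of exact agreement.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the raters' marginal proportions,
    # summed over all rating categories.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def iss(region_ais):
    """Baker's Injury Severity Score: sum of the squares of the three
    highest AIS values, each from a different body region."""
    top3 = sorted(region_ais.values(), reverse=True)[:3]
    return sum(a * a for a in top3)

# Hypothetical example: AIS severities (1-6) assigned by two coders
# to the same ten injuries.
coder1 = [2, 3, 1, 4, 2, 5, 3, 1, 2, 4]
coder2 = [2, 3, 2, 4, 2, 5, 3, 1, 3, 4]
print(f"kappa = {cohen_kappa(coder1, coder2):.2f}")

# Hypothetical patient: worst AIS value per body region.
print("ISS =", iss({"head": 4, "chest": 3, "abdomen": 2, "extremities": 2}))
```

With the sample values above, the sketch prints a kappa of about 0.74 (substantial agreement beyond chance) and an ISS of 29 (4^2 + 3^2 + 2^2).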
