SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Smith BM, Dyer CR, Chitturi MV, Lee JD. Transp. Res. Rec. 2017; 2663: 48-56.

Copyright

(Copyright © 2017, Transportation Research Board, National Research Council, National Academy of Sciences USA, Publisher SAGE Publishing)

DOI

10.3141/2663-07

PMID

unavailable

Abstract

Driver distraction represents a major safety problem in the United States. Naturalistic driving data, such as SHRP 2 Naturalistic Driving Study (NDS) data, provide a new window into driver behavior that promises a deeper understanding than was previously possible. Unfortunately, the current practice of manual coding is infeasible for large data sets such as SHRP 2 NDS, which contains millions of hours of video. Computer vision algorithms have the potential to automatically code SHRP 2 NDS videos. However, existing algorithms are brittle in the presence of challenges such as low video quality, underexposure and overexposure, driver occlusion, nonfrontal faces, and unpredictable and significant illumination changes, which are all substantially present in SHRP 2 NDS videos. This paper presents and evaluates algorithms developed to quantify high-level features pertinent to driver distraction and engagement in challenging videos like those in SHRP 2 NDS. Specifically, a novel three-stage video analysis system is presented for tracking head position and estimating head pose and eye and mouth states. The accuracy of the new head pose estimation module is competitive with the state of the art on publicly available data sets and produces good qualitative results on SHRP 2 NDS videos.
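The abstract describes a three-stage pipeline: track head position, estimate head pose, then classify eye and mouth states. The sketch below is only an illustration of that stage structure, not the paper's actual algorithms; the stand-in logic (bright-pixel bounding box for tracking, horizontal box offset for yaw, mean intensity for eye state) is entirely hypothetical and far simpler than what a real system for SHRP 2 NDS video would need.

```python
from dataclasses import dataclass
from typing import List, Tuple

Frame = List[List[int]]  # toy grayscale frame, pixel values 0-255


@dataclass
class FrameAnalysis:
    head_box: Tuple[int, int, int, int]  # x, y, width, height
    yaw_deg: float                       # coarse head pose (yaw only here)
    eyes_open: bool


# Stage 1 (toy stand-in): "track" the head as the bounding box of bright pixels.
def track_head(frame: Frame, thresh: int = 128) -> Tuple[int, int, int, int]:
    xs = [x for row in frame for x, v in enumerate(row) if v > thresh]
    ys = [y for y, row in enumerate(frame) if any(v > thresh for v in row)]
    if not xs:
        return (0, 0, 0, 0)
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)


# Stage 2 (toy stand-in): map the box centre's horizontal offset to a yaw angle.
def estimate_yaw(frame: Frame, box: Tuple[int, int, int, int]) -> float:
    frame_cx = len(frame[0]) / 2.0
    box_cx = box[0] + box[2] / 2.0
    return (box_cx - frame_cx) / frame_cx * 90.0  # offset mapped to [-90, 90]


# Stage 3 (toy stand-in): classify eye state from mean intensity inside the box.
def eyes_open(frame: Frame, box: Tuple[int, int, int, int],
              thresh: int = 180) -> bool:
    x, y, w, h = box
    vals = [frame[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    return sum(vals) / max(len(vals), 1) > thresh


def analyze_frame(frame: Frame) -> FrameAnalysis:
    """Run the three stages in sequence on one video frame."""
    box = track_head(frame)
    return FrameAnalysis(box, estimate_yaw(frame, box), eyes_open(frame, box))
```

The point of the staged design, as the abstract implies, is that each stage can be hardened independently against the listed challenges (underexposure, occlusion, nonfrontal faces) without reworking the whole system.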


Language: en
