SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Shi H, Chen L, Wang X, Wang G, Wang Q. Sustainability (Basel) 2022; 14(1): e508.

Copyright

(Copyright © 2022, MDPI: Multidisciplinary Digital Publishing Institute)

DOI

10.3390/su14010508

PMID

unavailable

Abstract

Driver distraction has become a leading cause of traffic crashes, and among its various forms, visual distraction has the most direct impact on driving safety. When the driver's line of sight deviates from the road ahead, visual distraction is highly likely. This paper proposes a nonintrusive, real-time method for classifying the driver's gaze region. A Multi-Task Convolutional Neural Network (MTCNN) face detector locates the driver's face image, and a full-face appearance-based gaze estimation method detects the driver's gaze direction. The gaze region is then classified by models trained with machine learning algorithms such as Support Vector Machines (SVM), Random Forest (RF), and K-Nearest Neighbors (KNN). Both a simulated experiment and a real-vehicle experiment were conducted to evaluate the method. The results show good gaze-region classification performance and strong robustness in complex environments. All models in this paper are lightweight networks that meet the accuracy and speed requirements of the tasks. The method can support further investigation of visual distraction levels and inform research on driving behavior.
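The final stage of the pipeline described above (classifying an estimated gaze direction into a discrete gaze region with SVM, RF, or KNN) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the region layout, the angle ranges, and the synthetic training data are all assumptions introduced here for demonstration.

```python
# Illustrative sketch of the gaze-region classification step.
# Assumption: gaze direction is represented as (yaw, pitch) angles in
# degrees, and each in-cabin gaze region clusters around a center.
# The region centers and noise level below are hypothetical.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical gaze-region centers: (yaw, pitch) in degrees.
REGIONS = {
    0: (0.0, 0.0),     # road ahead
    1: (-35.0, 5.0),   # left mirror
    2: (35.0, 5.0),    # right mirror
    3: (0.0, -20.0),   # dashboard / instrument cluster
}

# Generate noisy synthetic gaze samples around each region center.
X, y = [], []
for label, center in REGIONS.items():
    X.append(rng.normal(loc=center, scale=4.0, size=(200, 2)))
    y.append(np.full(200, label))
X, y = np.vstack(X), np.concatenate(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Train and score the three classifier families named in the abstract.
classifiers = {
    "SVM": SVC(kernel="rbf", gamma="scale"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
scores = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    scores[name] = clf.score(X_test, y_test)
    print(f"{name} test accuracy: {scores[name]:.3f}")
```

With well-separated region centers, all three classifiers separate the synthetic clusters easily; in the real task the inputs would instead be gaze directions produced by the appearance-based estimator from driver face images.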


Language: en
