SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Elhenawy M, Masoud M, Haworth N, Young K, Rakotonirainy A, Grzebieta R, Williamson A. Transp. Res. F Traffic Psychol. Behav. 2023; 97: 31-43.

Copyright

(Copyright © 2023, Elsevier Publishing)

DOI

10.1016/j.trf.2023.06.016

PMID

unavailable

Abstract

Naturalistic driving study (NDS) data, collected from vehicles fitted with multiple sensors including cameras, are a rich source for understanding how drivers become distracted. However, detecting distraction events has traditionally required watching NDS videos, which is expensive and time-consuming. Adopting state-of-the-art machine learning techniques to automate distraction detection from video will save effort and money: a random sample of the videos can be reduced/annotated manually and used to train a machine learning model that carries out the data reduction for the rest of the dataset. This paper investigates the feasibility of using a pre-trained deep neural network and a random forest to detect driver distraction in the Australian Naturalistic Driving Study (ANDS) video data. The pre-trained model was fine-tuned using transfer learning so that it can classify successive frames independently (i.e. distracted/non-distracted). To capture the time dependency between consecutive frames during the training phase, we converted the 1-D output probability of the fine-tuned model into a higher-dimensional space: the distraction probability of the frame at time t is transformed into a vector by concatenating the output probabilities of all frames within a time window centred at t, and that vector is assigned the actual label of the frame at time t. A Random Forest (RF) classifier was then used to model the non-linear relationship between the probabilities in this new space and the distracted/non-distracted labels. The proposed framework was trained on one set of ANDS trips and tested on another. When applied to the dashboard camera, it achieved 0.609, 0.218 and 0.325 for the true positive rate (TPR), the false positive rate (FPR), and the precision, respectively; for the face camera it achieved 0.748, 0.344 and 0.651 for the TPR, FPR and precision, respectively. These promising results suggest the proposed framework can be used routinely in many other data reduction tasks.
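The windowing-plus-RF step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window half-width, the synthetic per-frame probabilities, the labels, and the `window_features` helper are all assumptions for demonstration; only the overall idea (concatenating a centred window of per-frame distraction probabilities and fitting a Random Forest on the resulting vectors) comes from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(probs, half_width):
    """Build one feature vector per frame by concatenating the
    distraction probabilities of all frames within a window
    centred at that frame (edges padded by replication).
    `window_features` is a hypothetical helper, not from the paper."""
    padded = np.pad(probs, half_width, mode="edge")
    return np.stack([padded[i:i + 2 * half_width + 1]
                     for i in range(len(probs))])

# Synthetic per-frame distraction probabilities, standing in for the
# output of the fine-tuned pre-trained CNN; the paper does not report
# its actual window length or probability values.
probs = np.array([0.10, 0.20, 0.90, 0.80, 0.85, 0.20, 0.10, 0.15])
labels = np.array([0, 0, 1, 1, 1, 0, 0, 0])  # distracted = 1

X = window_features(probs, half_width=2)      # shape: (8 frames, 5 probs)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)
pred = clf.predict(X)
```

The padding choice at the sequence edges is one of several reasonable options; the key point is that each frame's feature vector carries the temporal context that a per-frame classifier alone would miss.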


Language: en

Keywords

Detection; Driver distraction; Pre-trained models; Transfer learning
