SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Li W, Wang J, Ren T, Li F, Zhang J, Wu Z. IEEE Trans. Intell. Transp. Syst. 2022; 23(10): 17922-17935.

Copyright

(Copyright © 2022, IEEE (Institute of Electrical and Electronics Engineers))

DOI

10.1109/TITS.2022.3161986

PMID

unavailable

Abstract

For deployment on an embedded processor for distracted driver classification, a model must satisfy demands for high accuracy, real-time inference, and limited storage. Conventional deep CNN models such as VGG, ResNet, and DenseNet often aim for high accuracy, making them too heavy for an embedded system with limited memory and computing resources. In contrast, lightweight models are greatly compressed but at a significant sacrifice in accuracy. To bridge this gap, we propose an instance-specific multi-teacher knowledge distillation model (IsMt-KD) to learn more accurate, faster, and lighter CNNs for distracted driver posture classification. In multi-teacher knowledge distillation, most current approaches either randomly select one teacher model and use its prediction as the soft label, or assign an equal weight to every teacher and average the teachers' predictions as the soft label. We observe that, for the same instance, the outputs of different teachers vary greatly: some teachers predict it correctly, whereas others may assign high probabilities to irrelevant classes. It is therefore inappropriate to set fixed or equal weights for teachers. To this end, a simple yet effective instance-specific teacher grading module is designed to dynamically assign weights to teacher models based on individual instances. In this way, knowledge from multiple teachers is dynamically distilled by considering both instance-specific high-level and instance-specific intermediate-level information. Extensive experimental results on the AUC and StateFarm datasets, and our implementation on edge hardware platforms including the HUAWEI MediaPad C5 and NVIDIA Jetson TX2, verify the effectiveness and feasibility of our approach.
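The core idea above, weighting each teacher per instance rather than uniformly, can be sketched as follows. This is a minimal NumPy illustration, not the paper's method: IsMt-KD learns its grading module, whereas here the weights are simply each teacher's (temperature-softened) confidence on the true class, normalized across teachers; all function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Numerically stable softmax over the last axis.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def instance_teacher_weights(teacher_logits, labels, temperature=4.0):
    """Per-instance teacher weights (illustrative stand-in for the
    learned grading module): each teacher is weighted by the softened
    probability it assigns to the true class of that instance.

    teacher_logits: (n_teachers, batch, n_classes)
    labels:         (batch,) integer class ids
    returns:        (n_teachers, batch), columns sum to 1
    """
    probs = softmax(teacher_logits, temperature)
    batch_idx = np.arange(labels.shape[0])
    conf = probs[:, batch_idx, labels]            # (n_teachers, batch)
    return conf / conf.sum(axis=0, keepdims=True)

def blended_soft_labels(teacher_logits, labels, temperature=4.0):
    # Instance-specific convex combination of the teachers' soft labels.
    probs = softmax(teacher_logits, temperature)  # (T, B, C)
    w = instance_teacher_weights(teacher_logits, labels, temperature)
    return np.einsum('tb,tbc->bc', w, probs)      # (B, C)
```

A teacher that is confident and correct on a given instance thus dominates the soft label for that instance, while the same teacher can receive a low weight on an instance it gets wrong; the blended distribution would then serve as the distillation target for the student.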


Language: en

Keywords

Accidents; Biomedical monitoring; Brain modeling; Computational modeling; instance-specific; lightweight; Multi-teacher knowledge distillation; Real-time systems; teacher grading; Vehicles; Wheels
