SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Zhou Z, Dong X, Li Z, Yu K, Ding C, Yang Y. IEEE Trans. Intel. Transp. Syst. 2022; 23(10): 19772-19781.

Copyright

(Copyright © 2022, IEEE (Institute of Electrical and Electronics Engineers))

DOI

10.1109/TITS.2022.3147826

PMID

unavailable

Abstract

In the Vehicular Ad hoc Network (VANET) environment, recognizing traffic accident events in driving videos captured by vehicle-mounted cameras is an essential task. Traffic accidents generally occupy only a short span of a driving video, and the backgrounds of driving videos are dynamic and complex, which together make traffic accident detection quite challenging. To detect accidents from driving videos effectively and efficiently, we propose an accident detection approach based on spatio-temporal feature encoding with a multilayer neural network. Specifically, the multilayer neural network encodes the temporal features of the video in order to cluster the video frames. From the resulting frame clusters, we detect the border frames as potential accident frames. We then capture and encode the spatial relationships of the objects detected in these potential accident frames to confirm whether they are accident frames. Extensive experiments demonstrate that the proposed approach achieves promising detection accuracy and efficiency, and meets the real-time detection requirement of the VANET environment.
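The two-stage pipeline summarized in the abstract can be sketched in miniature. This is a hypothetical illustration only, not the authors' implementation: the paper uses a multilayer neural network to encode temporal features and an object detector to obtain spatial relationships, whereas here a simple distance jump between hand-made per-frame feature vectors stands in for the neural temporal clustering, and an axis-aligned box-overlap test stands in for the spatial-relation confirmation.

```python
# Illustrative sketch of the abstract's two-stage pipeline.
# All functions, feature vectors, and thresholds are hypothetical stand-ins.

def temporal_borders(frame_feats, thresh):
    """Stage 1 (stand-in): flag frames whose feature vector jumps sharply
    from the previous frame. These 'border frames' between clusters are
    treated as potential accident frames."""
    borders = []
    for i in range(1, len(frame_feats)):
        dist = sum((a - b) ** 2
                   for a, b in zip(frame_feats[i], frame_feats[i - 1])) ** 0.5
        if dist > thresh:
            borders.append(i)
    return borders

def boxes_collide(box_a, box_b):
    """Stage 2 (stand-in): confirm a potential accident frame by checking
    whether two detected object boxes (x1, y1, x2, y2) overlap."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

# Toy data: five frames of 2-D features with an abrupt jump at frame 3.
feats = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1), (3.0, 3.0), (3.1, 3.0)]
candidates = temporal_borders(feats, thresh=1.0)
print(candidates)  # frame 3 is flagged as a potential accident frame

# Spatial check on the candidate frame's (hypothetical) detected boxes.
print(boxes_collide((0, 0, 10, 10), (5, 5, 15, 15)))
```

In the actual approach, the temporal encoding is learned rather than hand-crafted, and the spatial confirmation encodes richer object relationships than raw overlap; the sketch only mirrors the candidate-then-confirm structure.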


Language: en

Keywords

Accidents; Anomaly detection; Encoding; Feature extraction; Neural network; Real-time systems; security communication; traffic accident detection; traffic safety; VANETs; Vehicular ad hoc networks; Videos
