SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Mauri A, Khemmar R, Decoux B, Haddad M, Boutteau R. J. Imaging 2021; 7(8): e145.

Copyright

(Copyright © 2021, MDPI: Multidisciplinary Digital Publishing Institute)

DOI

10.3390/jimaging7080145

PMID

unavailable

Abstract

For smart mobility, autonomous vehicles, and advanced driver-assistance systems (ADASs), perception of the environment is an important task in scene analysis and understanding. Better perception of the environment allows for enhanced decision making, which, in turn, enables very high-precision actions. To this end, we introduce in this work a new real-time deep learning approach for 3D multi-object detection for smart mobility, not only on roads but also on railways. To obtain the 3D bounding boxes of the objects, we modified a proven real-time 2D detector, YOLOv3, to predict 3D object localization, object dimensions, and object orientation. Our method has been evaluated on KITTI's road dataset as well as on our own hybrid virtual road/rail dataset acquired from the video game Grand Theft Auto (GTA) V. The evaluation on these two datasets shows good accuracy and, more importantly, that the method can be used in real-time conditions in both road and rail traffic environments. Through our experimental results, we also show how the accuracy of the predicted regions of interest (RoIs) affects the estimation of the 3D bounding box parameters.
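
Editor's illustration (not the authors' code): the abstract states that the detector regresses a 3D location, object dimensions, and an orientation for each object. The short Python sketch below shows, under a KITTI-style camera-frame convention (y pointing down, box center on the bottom face, yaw around the vertical axis), how such a set of regressed parameters defines the eight corners of a 3D bounding box. All names and the exact parameterization are assumptions for illustration only.

import numpy as np

def box3d_corners(x, y, z, h, w, l, yaw):
    """Return the 8 corners (8x3 array) of a 3D box from its regressed parameters.

    The box is built axis-aligned in its own frame, rotated by `yaw` around the
    vertical (y) axis, then translated to the predicted center (x, y, z).
    """
    # Corner offsets in the object frame (x: length, y: height, z: width).
    x_c = np.array([ l/2,  l/2, -l/2, -l/2,  l/2,  l/2, -l/2, -l/2])
    y_c = np.array([ 0.0,  0.0,  0.0,  0.0,   -h,   -h,   -h,   -h])
    z_c = np.array([ w/2, -w/2, -w/2,  w/2,  w/2, -w/2, -w/2,  w/2])
    corners = np.vstack([x_c, y_c, z_c])           # shape (3, 8)

    # Rotation around the vertical (y) axis by the predicted yaw.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    corners = R @ corners

    # Translate to the predicted 3D location.
    corners += np.array([[x], [y], [z]])
    return corners.T                               # shape (8, 3)

# Example: a car-sized box 10 m ahead of the camera, rotated 30 degrees.
print(box3d_corners(0.0, 1.5, 10.0, 1.5, 1.6, 3.9, np.deg2rad(30)))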


Language: en

Keywords

deep learning; object detection; 3D bounding box estimation; 3D multi-object detection; distance estimation; localization; multi-modal dataset; object dimensions; object orientation; smart mobility
