SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Berviller Y, Ansarnia MS, Tisserand E, Schweitzer P, Trémeau A. Sensors (Basel) 2023; 23(5): e2637.

(Copyright © 2023, MDPI: Multidisciplinary Digital Publishing Institute)

DOI: 10.3390/s23052637
PMID: 36904841
PMCID: PMC10007371

Abstract

In this paper, we present a deep learning processing flow aimed at Advanced Driving Assistance Systems (ADASs) for urban road users. We provide a fine analysis of the optical setup of a fisheye camera and a detailed procedure for obtaining Global Navigation Satellite System (GNSS) coordinates, along with the speed, of the moving objects. The camera-to-world transform incorporates the lens-distortion function. YOLOv4, re-trained on ortho-photographic fisheye images, provides road-user detection. All the information our system extracts from the image represents a small payload and can easily be broadcast to the road users. The results show that the system correctly classifies and localizes the detected objects in real time, even under low-light conditions. For an effective observation area of 20 m × 50 m, the localization error is on the order of one meter. The velocities of the detected objects are estimated by offline processing with the FlowNet2 algorithm; the accuracy is good, with an error below one meter per second over the urban speed range (0 to 15 m/s). Moreover, the nearly ortho-photographic configuration of the imaging system ensures the anonymity of all street users.
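
To make the camera-to-world step described in the abstract concrete, the sketch below shows one common way such a mapping is implemented. It is not the authors' code: it assumes OpenCV's equidistant fisheye model (cv2.fisheye) as a stand-in for the paper's own lens-distortion function, placeholder intrinsics K and D, and known camera extrinsics. Converting the resulting local ground-plane coordinates to GNSS coordinates would require a further geodetic transform that is omitted here.

    import cv2
    import numpy as np

    # Placeholder intrinsics for a fisheye camera calibrated with OpenCV's
    # equidistant fisheye model; the paper derives its own lens-distortion
    # function, so these values are purely illustrative.
    K = np.array([[560.0,   0.0, 960.0],
                  [  0.0, 560.0, 540.0],
                  [  0.0,   0.0,   1.0]])
    D = np.array([-0.05, 0.01, -0.002, 0.0005])  # fisheye k1..k4

    def pixel_to_ground(u, v, R, C):
        """Map a distorted fisheye pixel (u, v) to world coordinates on
        the ground plane Z = 0.

        R -- 3x3 camera-to-world rotation
        C -- camera centre in world coordinates (C[2] = mounting height)

        Assumes a near-ortho-photographic (downward-looking) setup, so
        the viewing ray always intersects the ground plane.
        """
        # Undo the lens distortion to recover a normalized ray direction.
        pts = np.array([[[u, v]]], dtype=np.float64)
        xy = cv2.fisheye.undistortPoints(pts, K, D)       # point on z = 1
        ray_cam = np.array([xy[0, 0, 0], xy[0, 0, 1], 1.0])

        # Rotate the ray into the world frame and intersect it with Z = 0.
        ray_world = R @ ray_cam
        s = -C[2] / ray_world[2]       # scale at which the ray hits ground
        return C + s * ray_world       # world X, Y (Z ~ 0 by construction)

Given two such ground positions for the same object in successive frames, a speed estimate follows as displacement multiplied by the frame rate; in the paper, the image-space displacement feeding that estimate comes from offline FlowNet2 optical flow.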


Language: English

Keywords

deep learning; ADAS; camera to world transform; I2V; real time
