SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Cai Y, Li D, Zhou X, Mou X. Sensors (Basel) 2018; 18(12): 4158.

Affiliation

School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China. sunnymou@whut.edu.cn.

Copyright

(Copyright © 2018, MDPI: Multidisciplinary Digital Publishing Institute)

DOI

10.3390/s18124158

PMID

30486408

Abstract

Environment perception is one of the major issues in autonomous driving systems. In particular, effective and robust drivable road region detection remains a challenge for autonomous vehicles on multi-lane roads, at intersections, and in unstructured road environments. In this paper, a computer vision and neural network-based approach to drivable road region detection is proposed for fixed-route autonomous vehicles (e.g., shuttles, buses, and other vehicles operating on fixed routes), using a vehicle-mounted camera, a route map, and real-time vehicle location. The key idea of the proposed approach is to fuse an image with its corresponding local route map to obtain a map-fusion image (MFI), in which the image and route-map information complement each other. The image information can be exploited in road regions with rich features, while the local route map provides critical heuristics that enable robust detection in areas without clear lane markings or borders. A neural network model based on convolutional neural networks (CNNs), namely FCN-VGG16, is used to extract the drivable road region from the fused MFI. The proposed approach is validated on real-world driving scenario videos captured by an industrial camera mounted on a testing vehicle. Experiments demonstrate that the proposed approach outperforms the conventional approach that uses non-fused images in terms of detection accuracy and robustness, and that it remains robust to adverse illumination conditions and pavement appearance, as well as to projection and map-fusion errors.
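The abstract gives only a high-level description of the pipeline. The sketch below (Python/PyTorch) illustrates the general idea under stated assumptions: the local route map is assumed to be already projected into the image plane as a binary mask, the channel-blending fusion weight is an arbitrary illustrative choice, and the segmentation head is a simplified FCN-32s-style decoder on VGG16 features. It is not the authors' implementation.

```python
# Hypothetical sketch: fuse a camera frame with a projected local route map into
# a map-fusion image (MFI), then run an FCN built on a VGG16 backbone to
# predict a per-pixel drivable-road mask. Fusion weight, channel layout, and
# upsampling scheme are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16


def make_mfi(frame_rgb: np.ndarray, route_mask: np.ndarray) -> np.ndarray:
    """Blend a binary route-map mask (H x W, values 0/1) into one channel of the
    RGB frame so map cues survive where lane markings are weak."""
    mfi = frame_rgb.astype(np.float32).copy()
    alpha = 0.5  # assumed fusion weight
    mfi[..., 2] = (1.0 - alpha) * mfi[..., 2] + alpha * 255.0 * route_mask
    return mfi.astype(np.uint8)


class FCNVGG16(nn.Module):
    """Simplified FCN-style segmentation model on VGG16 convolutional features
    (2 classes: drivable / not drivable)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = vgg16(weights=None).features  # conv layers only, stride 32
        self.classifier = nn.Conv2d(512, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        feats = self.backbone(x)          # 1/32 resolution feature map
        logits = self.classifier(feats)   # per-class scores
        return nn.functional.interpolate(
            logits, size=(h, w), mode="bilinear", align_corners=False
        )


if __name__ == "__main__":
    frame = np.zeros((256, 512, 3), dtype=np.uint8)   # dummy camera frame
    route = np.zeros((256, 512), dtype=np.float32)    # dummy projected route mask
    route[:, 200:320] = 1.0
    mfi = make_mfi(frame, route)
    x = torch.from_numpy(mfi).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    mask = FCNVGG16()(x).argmax(dim=1)                # per-pixel class labels
    print(mask.shape)                                 # torch.Size([1, 256, 512])
```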


Language: en

Keywords

FCNs; autonomous vehicles; drivable road region detection; map-fusion image
