Journal Article

Citation

Deng T, Wu Y. PLoS One 2022; 17(3): e0264551.

Copyright

(Copyright © 2022, Public Library of Science)

DOI

10.1371/journal.pone.0264551

PMID

35245342

Abstract

Aiming at vehicle and lane detection in road scenes, this paper proposes a joint vehicle and lane line detection method suited to car-following scenes. The method uses an encoder-decoder structure and a multi-task design, sharing the feature extraction network and the feature enhancement and fusion module. ASPP (Atrous Spatial Pyramid Pooling) and FPN (Feature Pyramid Networks) are employed to improve the feature extraction ability and real-time performance of MobileNetV3, the attention mechanism CBAM (Convolutional Block Attention Module) is introduced into YOLOv4, and an asymmetric "more encoding, less decoding" architecture is designed for the semantic pixel-wise segmentation network. The proposed model uses the improved MobileNetV3 as the feature extraction block, with YOLOv4-CBAM and Asymmetric SegNet as branches that detect vehicles and lane lines, respectively. The model is trained and tested on the BDD100K data set, is also tested on the KITTI data set and Chongqing road images, and the evaluation focuses on detection performance in the car-following scene. The experimental results show that the proposed model surpasses YOLOv4 by +1.1 AP50, +0.9 Recall, +0.7 F1, and +0.3 Precision, and surpasses SegNet by +1.2 IoU on BDD100K. At the same time, its detection speed is 1.7 times that of YOLOv4 and 3.2 times that of SegNet. These results demonstrate the feasibility and effectiveness of the improved method.


Language: en
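
To make the multi-task design described in the abstract concrete, the following is a minimal sketch (PyTorch, not the authors' code) of a network with one shared backbone feeding a detection branch and a lane-segmentation branch. All class names (TinyBackbone, DetectionHead, LaneSegHead, JointNet) are hypothetical stand-ins for the paper's improved MobileNetV3, YOLOv4-CBAM, and Asymmetric SegNet modules; ASPP, FPN, and CBAM are omitted for brevity.

# Minimal sketch of the shared-backbone, two-branch multi-task idea.
# This is an illustrative stand-in, not the authors' implementation.

import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for the shared feature extractor (improved MobileNetV3 in the paper)."""
    def __init__(self, out_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)  # 1/4-resolution feature map shared by both branches

class DetectionHead(nn.Module):
    """Stand-in for the vehicle-detection branch (YOLOv4-CBAM in the paper).
    Predicts per-cell box offsets, objectness, and class scores."""
    def __init__(self, in_channels, num_anchors=3, num_classes=1):
        super().__init__()
        self.pred = nn.Conv2d(in_channels, num_anchors * (5 + num_classes), 1)

    def forward(self, feats):
        return self.pred(feats)

class LaneSegHead(nn.Module):
    """Stand-in for the lane-line branch (Asymmetric SegNet in the paper):
    a light decoder that upsamples back to input resolution."""
    def __init__(self, in_channels, num_classes=2):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, feats):
        return self.decode(feats)

class JointNet(nn.Module):
    """Shared backbone computed once, reused by the detection and segmentation branches."""
    def __init__(self):
        super().__init__()
        self.backbone = TinyBackbone()
        self.det_head = DetectionHead(64)
        self.seg_head = LaneSegHead(64)

    def forward(self, x):
        feats = self.backbone(x)
        return self.det_head(feats), self.seg_head(feats)

if __name__ == "__main__":
    model = JointNet()
    det_out, seg_out = model(torch.randn(1, 3, 256, 256))
    print(det_out.shape, seg_out.shape)  # [1, 18, 64, 64] and [1, 2, 256, 256]

Because the backbone runs once per image, the joint model avoids duplicating the most expensive computation across the two tasks, which is the source of the speed advantage the abstract reports over running YOLOv4 and SegNet separately.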
