SAFETYLIT WEEKLY UPDATE

Search Results

Journal Article

Citation

Naudé AJ, Myburgh HC. Sensors (Basel) 2023; 23(17): e7355.

Copyright

(Copyright © 2023, MDPI: Multidisciplinary Digital Publishing Institute)

DOI

10.3390/s23177355

PMID

37687809

Abstract

Road scene understanding, as a field of research, has attracted increasing attention in recent years. The development of road scene understanding capabilities that are applicable to real-world road scenarios has seen numerous complications. This has largely been due to the cost and complexity of achieving human-level scene understanding, at which successful segmentation of road scene elements can be achieved with a mean intersection over union score close to 1.0. There is a need for a more unified approach to road scene segmentation for use in self-driving systems. Previous works have demonstrated how deep learning methods can be combined to improve the segmentation and perception performance of road scene understanding systems. This paper proposes a novel segmentation system that uses fully connected networks, attention mechanisms, and multiple-input data stream fusion to improve segmentation performance.

RESULTS: The system shows performance comparable to that of previous works, with a mean intersection over union of 87.4% on the Cityscapes dataset.
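The mean intersection over union (mIoU) metric cited above can be sketched as follows. This is a generic illustration of the metric, not the authors' evaluation code; the function name and the convention of skipping classes absent from both masks are assumptions for this sketch.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union over classes present in either mask.

    pred, target: integer arrays of per-pixel class labels, same shape.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent in both prediction and ground truth
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Tiny 2x2 example: class 0 has IoU 1/2, class 1 has IoU 2/3
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # 0.5833...
```

A score close to 1.0 means the predicted segmentation masks almost exactly overlap the ground-truth masks for every class, which is the human-level threshold the abstract refers to.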


Language: en

Keywords

data fusion; dual attention mechanisms; road scene understanding; scene segmentation; self-driving
