SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Song C, Wu J, Zhu L, Zhang M, Ling H. IEEE Trans. Intel. Transp. Syst. 2022; 23(4): 3244-3255.

Copyright

Copyright © 2022, IEEE (Institute of Electrical and Electronics Engineers)

DOI

10.1109/TITS.2020.3033569

PMID

unavailable

Abstract

Due to recent advances in learning-based semantic segmentation, road scene parsing can usually achieve satisfactory results under normal illumination conditions. However, training a robust model for parsing nighttime road scenes remains very challenging, especially when semantic labels for the training samples are absent. In this paper, we propose a convolutional neural network (CNN)-based method for parsing nighttime road scenes in an unsupervised manner. The proposed system consists of an appearance transferring module and a segmentation module, which are coupled together and learned in an end-to-end fashion. The appearance transferring module transfers unlabeled images acquired during both daytime and nighttime into a shared latent feature space that encodes the image content of both scenes at the semantic level. The segmentation module then maps these features to their corresponding semantic labels. To better evaluate the proposed model, we also construct a new semantic segmentation dataset containing 1,566 nighttime images. Extensive experiments on the proposed benchmark show that the model achieves significant improvement over the baselines as well as a recently released system.
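The abstract describes a two-module pipeline: a shared encoder embedding day and night images into one latent space, and a single segmentation head decoding that space into per-pixel labels. The paper's actual implementation is not reproduced here; the following is only a minimal structural sketch of that coupling, with all class names, layer choices, and dimensions being illustrative assumptions (the real system uses deep CNNs and adversarial training, not the 1x1 linear maps used below).

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoder:
    """Stand-in for the appearance transferring module: maps a day
    or a night image into a shared latent feature space.
    (Illustrative only; a single 1x1 linear map replaces the CNN.)"""
    def __init__(self, in_ch=3, feat_ch=16):
        self.w = rng.standard_normal((feat_ch, in_ch)) * 0.1

    def __call__(self, img):            # img: (H, W, in_ch)
        return np.tanh(img @ self.w.T)  # -> (H, W, feat_ch)

class SegmentationHead:
    """Stand-in for the segmentation module: maps shared latent
    features to per-pixel class scores."""
    def __init__(self, feat_ch=16, n_classes=19):
        self.w = rng.standard_normal((n_classes, feat_ch)) * 0.1

    def __call__(self, feat):           # feat: (H, W, feat_ch)
        return feat @ self.w.T          # -> (H, W, n_classes)

encoder = SharedEncoder()
head = SegmentationHead()

day = rng.random((8, 8, 3))    # unlabeled daytime image
night = rng.random((8, 8, 3))  # unlabeled nighttime image

# Both domains are embedded in the SAME latent space ...
f_day, f_night = encoder(day), encoder(night)

# ... and decoded by ONE segmentation head, so labels learned from
# daytime appearance transfer to nighttime inputs.
labels_day = head(f_day).argmax(axis=-1)      # (H, W) class indices
labels_night = head(f_night).argmax(axis=-1)  # (H, W) class indices
```

The key design point mirrored here is weight sharing: because a single encoder and a single head serve both domains, supervision (or, in the unsupervised setting, the coupled training signal) applied to one domain constrains predictions in the other.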


Language: en

Keywords

Annotations; Feature extraction; generative adversarial network; Image segmentation; Lighting; night images; Roads; semantic segmentation; Semantics; Task analysis; Transfer learning
