SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Villalonga G, van de Weijer J, López AM. Sensors (Basel) 2020; 20(3): e583.

Affiliation

Computer Science Department, Universitat Autònoma de Barcelona (UAB), 08193 Bellaterra, Spain.

Copyright

(Copyright © 2020, MDPI: Multidisciplinary Digital Publishing Institute)

DOI

10.3390/s20030583

PMID

31973078

Abstract

On-board vision systems may need to increase the number of classes that can be recognized in a relatively short period. For instance, a traffic sign recognition system may suddenly be required to recognize new signs. Since collecting and annotating samples of such new classes may need more time than we wish, especially for uncommon signs, we propose a method to generate these samples by combining synthetic images and Generative Adversarial Network (GAN) technology. In particular, the GAN is trained on synthetic and real-world samples from known classes to perform synthetic-to-real domain adaptation, but applied to synthetic samples of the new classes. Using the Tsinghua dataset with a synthetic counterpart, SYNTHIA-TS, we have run an extensive set of experiments. The results show that the proposed method is indeed effective, provided that we use a proper Convolutional Neural Network (CNN) to perform the traffic sign recognition (classification) task as well as a proper GAN to transform the synthetic images. Here, a ResNet101-based classifier and domain adaptation based on CycleGAN performed extremely well for a ratio ∼1/4 for new/known classes; even for more challenging ratios such as ∼4/1, the results are also very positive.
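The abstract describes a pipeline in which a synthetic-to-real generator (CycleGAN), trained on known classes, translates synthetic images of new classes so they can be mixed into the training set of a ResNet101 classifier. The sketch below is not the authors' code; it is a minimal PyTorch illustration of that flow under stated assumptions. In particular, `load_cyclegan_generator`, `known_real_ds`, `new_synthetic_ds`, and the class count are hypothetical placeholders standing in for a pre-trained synthetic-to-real generator and for the Tsinghua / SYNTHIA-TS samples.

```python
# Minimal sketch (not the paper's implementation) of training a traffic sign
# classifier on real samples of known classes plus GAN-adapted synthetic
# samples of new classes. Placeholders: load_cyclegan_generator,
# known_real_ds, new_synthetic_ds, NUM_CLASSES (all hypothetical).
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader, Dataset
from torchvision import models

NUM_CLASSES = 50  # known + new traffic sign classes (illustrative value)

# 1) Classifier: ResNet101 backbone with a traffic sign classification head.
classifier = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
classifier.fc = nn.Linear(classifier.fc.in_features, NUM_CLASSES)

# 2) Domain adaptation: a CycleGAN generator trained on synthetic<->real pairs
#    of the *known* classes, applied to synthetic images of the *new* classes.
generator = load_cyclegan_generator("synthetic2real.pth")  # hypothetical loader


class AdaptedDataset(Dataset):
    """Wraps a synthetic dataset and translates each image to the real domain."""

    def __init__(self, synthetic_ds, gen):
        self.ds = synthetic_ds
        self.gen = gen.eval()

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, idx):
        image, label = self.ds[idx]
        with torch.no_grad():
            image = self.gen(image.unsqueeze(0)).squeeze(0)
        return image, label


# 3) Train on real known-class samples plus adapted new-class samples.
train_ds = ConcatDataset([known_real_ds, AdaptedDataset(new_synthetic_ds, generator)])
loader = DataLoader(train_ds, batch_size=64, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01, momentum=0.9)

classifier.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(classifier(images), labels)
    loss.backward()
    optimizer.step()
```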


Language: en

Keywords

CNNs; traffic sign recognition; training with synthetic data
