SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Unal D, Catak FO, Houkan MT, Mudassir M, Hammoudeh M. ISA Trans. 2022; ePub(ePub): ePub.

Copyright

(Copyright © 2022, International Society of Automation, Publisher Elsevier)

DOI

10.1016/j.isatra.2022.11.007

PMID

36435643

Abstract

Correct environmental perception of objects on the road is vital for the safety of autonomous driving. Making appropriate decisions by the autonomous driving algorithm could be hindered by data perturbations and, more recently, by adversarial attacks. We propose an adversarial test input generation approach based on uncertainty to make the machine learning (ML) model more robust against data perturbations and adversarial attacks. Adversarial attacks and uncertain inputs can affect the ML model's performance, which can have severe consequences such as the misclassification of objects on the road by autonomous vehicles, leading to incorrect decision-making. We show that we can obtain more robust ML models for autonomous driving by making a dataset that includes highly uncertain adversarial test inputs during the re-training phase. We demonstrate an improvement in the accuracy of the robust model by more than 12%, with a notable drop in the uncertainty of the decisions returned by the model. We believe our approach will assist in further developing risk-aware autonomous systems.
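The abstract does not say which attack or which uncertainty estimator the authors use, so the Python sketch below only illustrates the general recipe it describes: craft adversarial variants of test inputs, score them with a predictive-uncertainty measure, and keep the most uncertain ones for re-training. FGSM, Monte Carlo dropout, and every name and threshold in the code are assumptions for illustration, not the paper's method.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps=0.03):
        # Fast Gradient Sign Method: one common way to craft adversarial
        # inputs (an assumed choice; the abstract does not name the attack).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Assumes inputs are scaled to [0, 1].
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

    def mc_dropout_entropy(model, x, n_samples=20):
        # Predictive entropy over Monte Carlo dropout forward passes,
        # used here as the uncertainty score (an assumed estimator).
        model.train()  # keep dropout layers active at inference time
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(x), dim=1) for _ in range(n_samples)]
            )
        mean_probs = probs.mean(dim=0)
        model.eval()
        return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)

    def select_uncertain_adversarials(model, x, y, keep_ratio=0.25):
        # Keep the adversarial test inputs the model is most uncertain
        # about; keep_ratio is a hypothetical placeholder.
        x_adv = fgsm_perturb(model, x, y)
        entropy = mc_dropout_entropy(model, x_adv)
        k = max(1, int(keep_ratio * len(x_adv)))
        idx = entropy.topk(k).indices
        return x_adv[idx], y[idx]

The selected input-label pairs would then be appended to the training data for the re-training phase the abstract describes.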


Language: en

Keywords

Uncertainty; DL; Risk-aware autonomous systems; Test set generation
