SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Saadatnejad S, Bahari M, Khorsandi P, Saneian M, Moosavi-Dezfooli SM, Alahi A. Transp. Res. C Emerg. Technol. 2022; 141: e103705.

Copyright

(Copyright © 2022, Elsevier Publishing)

DOI

10.1016/j.trc.2022.103705

PMID

unavailable

Abstract

The transportation field has recently witnessed an arms race of neural network-based trajectory predictors. While these predictors are at the core of many applications, such as autonomous navigation and pedestrian flow simulation, their adversarial robustness has not been carefully studied. In this paper, we introduce a socially-attended attack to assess the social understanding of prediction models in terms of collision avoidance. An attack is a small yet carefully crafted perturbation of the input designed to make predictors fail. Technically, we define collision as a failure mode of the output and propose hard- and soft-attention mechanisms to guide our attack. Thanks to this attack, we shed light on the limitations of current models in terms of their social understanding. We demonstrate the strengths of our method on recent trajectory prediction models. Finally, we show that our attack can be employed to increase the social understanding of state-of-the-art models. The code is available at https://s-attack.github.io/.
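The authors' implementation is at the URL above. As a rough, self-contained illustration of the general idea only — a bounded perturbation of an agent's observed history, optimized so that the model's predicted trajectory collides with a neighbor — the toy sketch below uses a stand-in constant-velocity "predictor" and a simple finite-difference optimizer. All function names, the predictor, and the optimizer here are illustrative assumptions, not the paper's attention-guided method.

```python
import numpy as np

def predict(history):
    # Toy constant-velocity "predictor": extrapolate the last observed
    # velocity for 3 future steps. Stand-in for a neural predictor.
    v = history[-1] - history[-2]
    return np.array([history[-1] + v * (t + 1) for t in range(3)])

def collision_loss(pred, neighbor_pred):
    # Minimum predicted separation between the two agents; the attack
    # drives this toward zero, i.e. toward a predicted collision.
    return float(np.min(np.linalg.norm(pred - neighbor_pred, axis=1)))

def attack(history, neighbor_pred, eps=0.1, steps=50, lr=0.02):
    # Finite-difference gradient descent on the observed history,
    # with the perturbation clipped to an eps-ball (L-infinity bound).
    delta = np.zeros_like(history)
    h = 1e-4
    for _ in range(steps):
        base = collision_loss(predict(history + delta), neighbor_pred)
        grad = np.zeros_like(delta)
        for idx in np.ndindex(delta.shape):
            d = delta.copy()
            d[idx] += h
            grad[idx] = (collision_loss(predict(history + d),
                                        neighbor_pred) - base) / h
        delta = np.clip(delta - lr * grad, -eps, eps)
    return delta

# Example: an agent heading along x, a neighbor forecast on a parallel
# path. The bounded perturbation bends the prediction toward the
# neighbor, shrinking the predicted separation below its clean value.
history = np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.]])
neighbor_pred = np.array([[4., 1.], [5., 1.], [6., 1.]])
delta = attack(history, neighbor_pred)
```

A real attack would backpropagate through the trained predictor instead of using finite differences, and — as in the abstract — use attention over neighbors to pick which interactions to target.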


Language: en

Keywords

Adversarial attack; Human social behavior simulation; Human trajectory prediction; Robustness
