
Journal Article

Citation

Garcia KR, Mishler S, Xiao Y, Wang C, Hu B, Still JD, Chen J. J. Cogn. Eng. Decis. Mak. 2022; 16(4): 237-251.

Copyright

(Copyright © 2022, Human Factors and Ergonomics Society, Publisher SAGE Publishing)

DOI

10.1177/15553434221117001

PMID

unavailable

Abstract

Automated Driving Systems (ADS), like many other systems people use today, depend on successful Artificial Intelligence (AI) for safe roadway operations. In ADS, an essential function performed by AI is computer vision for detecting roadway signs. The AI, though, is not always reliable and sometimes requires the human's intelligence to complete a task. For the human to collaborate with the AI, it is critical to understand the human's perception of AI. In the present study, we investigated how human drivers perceive the AI's capabilities in a driving context where a stop sign is compromised, and how knowledge, experience, and trust related to AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of AI. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI's ability to recognize the malicious stop sign. Our findings suggest that the public does not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust the AI in certain conditions.


Language: en
