SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Atchley A, Barr HM, O'Hear EH, Gray CE, Chesser AF, Jones N, Tenhundfeld NL. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2022; 66(1): 187-191.

Copyright

(Copyright © 2022, Human Factors and Ergonomics Society, Publisher SAGE Publishing)

DOI

10.1177/1071181322661111

PMID

unavailable

Abstract

As autonomous systems become responsible for more complex decisions, it is crucial to consider how these systems will respond when they must make potentially controversial decisions without input from users. While previous literature has suggested that users prefer machinelike systems that act to promote the greater good, little research has examined how the humanlikeness of an agent influences how its moral decisions are perceived. We ran two online studies in which participants and an automated agent made a decision in an adapted trolley problem. Our results conflicted with previous literature: they did not support the idea that humanlike agents are trusted in moral dilemmas in a manner analogous to humans. However, our study did support the importance, for trust, of a shared moral view between users and systems. Further investigation is necessary to clarify how humanlikeness and moral view interact to form impressions of trust in a system.


Language: en
