SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Chen JYC, Lakhmani SG, Stowers K, Selkowitz AR, Wright JL, Barnes M. Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theor. Issues Ergonomics Sci. 2018; 19(3): 259-282.

Copyright

(Copyright © 2018, Informa - Taylor and Francis Group)

DOI

10.1080/1463922X.2017.1315750

PMID

unavailable

Abstract

Effective collaboration between humans and agents depends on humans maintaining an appropriate understanding of and calibrated trust in the judgment of their agent counterparts. The Situation Awareness-based Agent Transparency (SAT) model was proposed to support human awareness in human-agent teams. As agents transition from tools to artificial teammates, an expansion of the model is necessary to support teamwork paradigms, which require bidirectional transparency. We propose that an updated model can better inform human-agent interaction in paradigms involving more advanced agent teammates. This paper describes the model's use in three programmes of research, which exemplify the utility of the model in different contexts - an autonomous squad member, a mediator between a human and multiple subordinate robots, and a plan recommendation agent. Through this review, we show that the SAT model continues to be an effective tool for facilitating shared understanding and proper calibration of trust in human-agent teams.


Language: en

Keywords

autonomy; bidirectional communication; human-autonomy teaming; human–robot interaction; Transparency
