SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Gibert M, Martin D. AI Soc. 2022; 37(1): 319-330.

Copyright

(Copyright © 2022, Holtzbrinck Springer Nature Publishing Group)

DOI

10.1007/s00146-021-01179-z

PMID

unavailable

Abstract

Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what conditions should we grant moral status to an artificial intelligence (AI) system? This paper examines different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each case but the last, we find unresolved issues with the argument, which leads us to move on to the next. We set the idea of indirect duties aside, since such duties do not imply considering an AI system for its own sake. The paper rejects the relational argument and the argument from intelligence. The argument from life may lead us to grant moral status to an AI system, but only in a weak sense. Sentience, by contrast, is a strong argument for the moral status of an AI system, based, among other things, on the Aristotelian principle of equality: that like cases should be treated alike. The paper points out, however, that no AI system is sentient at the current level of technological development.


Language: en

Keywords

Artificial intelligence (AI); Biocentrism; Ethics; Moral status; Pathocentrism; Sentience
