SAFETYLIT WEEKLY UPDATE

We compile citations and summaries of about 400 new articles every week.

Journal Article

Citation

Russell S. Nature 2023; 614(7949): 620-623.

Copyright

(Copyright © 2023, Holtzbrinck Springer Nature Publishing Group)

DOI

10.1038/d41586-023-00511-5

PMID

36810886

Abstract

One year since Russia's invasion, an arms race in artificial-intelligence (AI) weaponry is being played out on Ukrainian soil. Western audiences cheer when plucky Ukrainian forces use modified commercial quadcopters to drop grenades on Russian soldiers. They boo when brutal Russian forces send swarms of cheap Iranian cruise missiles to destroy hospitals, power plants and apartment blocks. But this simple 'us versus them' narrative obscures a disturbing trend -- weapons are becoming ever smarter.

Soon, fully autonomous lethal weapon systems could become commonplace in conflict. Some are already on the market. Mercifully, few have actually been used in warfare, and none has been used in Ukraine, at the time of writing. Yet evolving events are a cause for concern.

The inevitable logic of using electronic countermeasures against remotely operated weapons is driving both sides towards increasing the level of autonomy of those weapons. That is pushing us ever closer to a dangerous world where lethal autonomous weapon systems are cheap and widely available tools for inflicting mass casualties -- weapons of mass destruction found in every arms supermarket, for sale to any dictator, warlord or terrorist.

Although it is difficult to discuss banning weapons that might help the Ukrainian cause, it is now urgent that world governments do so and limit the use of AI in war. No one wants this bleak future of robotic threats.

As a start, governments need to begin serious negotiations on a treaty to ban anti-personnel autonomous weapons, at the very least. Professional societies in AI and robotics should develop and enforce codes of conduct outlawing work on lethal autonomous weapons. And people the world over should understand that allowing algorithms to decide to kill humans is a terrible idea.


Language: en

Keywords

Policy; Machine learning; Technology; Computer science; Engineering
