SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Adam D. Nature 2024; ePub(ePub): ePub.

Copyright

(Copyright © 2024, Holtzbrinck Springer Nature Publishing Group)

DOI

10.1038/d41586-024-01029-0

PMID

38653827

Abstract

Autonomous weapons guided by artificial intelligence are already in use. Researchers, legal experts and ethicists are struggling with what should be allowed on the battlefield.

In the conflict between Russia and Ukraine, video footage has shown drones penetrating deep into Russian territory, more than 1,000 kilometres from the border, and destroying oil and gas infrastructure. It's likely, experts say, that artificial intelligence (AI) is helping to direct the drones to their targets. For such weapons, no person needs to hold the trigger or make the final decision to detonate.

The development of lethal autonomous weapons (LAWs), including AI-equipped drones, is on the rise. The US Department of Defense, for example, has earmarked US$1 billion so far for its Replicator programme, which aims to build a fleet of small, weaponized autonomous vehicles. Experimental submarines, tanks and ships have been made that use AI to pilot themselves and shoot. Commercially available drones can use AI image recognition to zero in on targets and blow them up. LAWs do not need AI to operate, but the technology adds speed, specificity and the ability to evade defences. Some observers fear a future in which swarms of cheap AI drones could be dispatched by any faction to take out a specific person, using facial recognition.

Warfare is a relatively simple application for AI. "The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It's a graduate-student project," says Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent campaigner against AI weapons. He helped to produce a viral 2017 video called Slaughterbots that highlighted the possible risks.

The emergence of AI on the battlefield has spurred debate among researchers, legal experts and ethicists. Some argue that AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage (such as civilian casualties and damage to residential areas) and the numbers of soldiers killed and maimed, while helping vulnerable nations and groups to defend themselves. Others emphasize that autonomous weapons could make catastrophic mistakes. And many observers have overarching ethical concerns about passing targeting decisions to an algorithm. ...


Language: en

Keywords

Ethics; Machine learning; Technology
