Stopping Killer Robots and Other Future Threats

22 February 2015

View in the Bulletin of the Atomic Scientists

This article discusses the dangers of autonomous weapons and argues that humanity must act proactively against new and potentially harmful weapons technologies.

The article begins as follows:

Only twice in history have nations come together to ban a weapon before it was ever used. In 1868, the Great Powers agreed under the Saint Petersburg Declaration to ban exploding bullets, which by spreading metal fragments inside a victim’s body could cause more suffering than the regular kind. And the 1995 Protocol on Blinding Laser Weapons now has 104 signatories, who have agreed to ban the weapons on the grounds that they could cause excessive suffering to soldiers in the form of permanent blindness.

Today a group of non-governmental organizations is working to outlaw another yet-to-be-used device, the fully autonomous weapon or killer robot. In 2012 the group formed the Campaign to Stop Killer Robots to push for a ban. Different from the remotely piloted unmanned aerial vehicles in common use today, fully autonomous weapons are military robots designed to make strike decisions for themselves. Once deployed, they identify targets and attack them without any human permission. None currently exist, but China, Israel, Russia, the United Kingdom, and the United States are actively developing precursor technology, according to the campaign.

It’s important that the Campaign to Stop Killer Robots succeed, either at achieving an outright ban or at sparking debate resulting in some other sensible and effective regulation. This is vital not just to prevent fully autonomous weapons from causing harm; an effective movement will also show us how to proactively ban other future military technology.

The remainder of the article is available in the Bulletin of the Atomic Scientists.

Image credit: US Air Force


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the time of the publication of the work referenced here.


Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
