On Winter-Safe Deterrence and Biological Weapons

by | 20 March 2015

View in the Bulletin of the Atomic Scientists

This article discusses tradeoffs in the use of biological weapons for deterrence, weighed against greater threats such as global catastrophe from nuclear winter.

The article begins as follows:

The status quo of large nuclear arsenals risks global catastrophe so severe that human civilization may never recover. If we care about the survival of human civilization, then we should seek solutions to make sure such a catastrophe never occurs. This is my starting point for exploring the potential for winter-safe deterrence. Perhaps, upon closer inspection, winter-safe deterrence will prove infeasible. But I do believe it is worth closer inspection. To that effect, I welcome this roundtable discussion and the broader conversation that my research has sparked.

I have defined winter-safe deterrence as military force capable of meeting the deterrence goals of today’s nuclear-armed states without risking catastrophic nuclear winter. Winter-safe deterrence recognizes two basic issues: first, large nuclear arsenals pose a devastating catastrophic risk; and second, nuclear-armed states may refuse to relinquish almost the entirety of their nuclear arsenals unless their deterrence goals are met through other means. Winter-safe deterrence thus aims to make the world safer given the politics of deterrence.

Personally, I would prefer that today’s nuclear-armed states simply decide that they no longer need deterrence based on the threat of massive destruction. Such destruction is immoral and against the spirit of international humanitarian law. To that effect, we should seek solutions that reduce the demand for this sort of deterrence, for example by improving relations between nuclear-armed states or by stigmatizing this deterrence. The ongoing initiative on the humanitarian impacts of nuclear weapons is enhancing precisely this stigma. I am proud to support the initiative. It is a first-best solution to nuclear war risk.

The remainder of the article is available in the Bulletin of the Atomic Scientists.

Image credit: United States Office of War Information


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the original publication date of the work referenced here.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
