Risk-Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection

by Seth Baum | 17 June 2019


If an asteroid is found to be on a collision course with Earth, it may be possible to deflect it away. One way of deflecting asteroids would be to use nuclear explosives. A nuclear deflection program may reduce the risk of an asteroid collision, but it might also inadvertently increase the risk of nuclear war or other violent conflict. This paper analyzes this potential tradeoff and evaluates its policy implications. The paper is published in Risk Analysis, the flagship journal of the Society for Risk Analysis.

(Note: nuclear “explosives” and nuclear “weapons” are the same physical objects. The term “weapon” is used when the objects are intended for military purposes, but the same objects can also be used for other purposes, including asteroid deflection.)

Nuclear deflection is an example of a risk-risk tradeoff, i.e., a situation in which a possible action could decrease one risk but increase another. Evaluating these tradeoffs requires quantitative risk analysis: in risk terms, the action is only beneficial if it decreases the first risk by more than it increases the second. Risk-risk tradeoff analysis is important for effective decision-making on global catastrophic risk. Without it, one could inadvertently take actions that increase the overall risk.
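In rough symbols (my notation here is illustrative, not the paper's): writing each risk as an expected harm, probability times severity, the action is beneficial in risk terms only if the net change in total expected harm is negative.

```latex
% Sketch of the risk-risk tradeoff criterion; notation is illustrative, not from the paper.
% R_1: the risk the action reduces (e.g., asteroid collision).
% R_2: the risk the action may increase (e.g., violent conflict).
\Delta R_{\mathrm{total}} = \Delta R_1 + \Delta R_2 < 0,
\qquad
\Delta R_i = p_i' h_i' - p_i h_i
% p_i, h_i: probability and harm of risk i without the action.
% p_i', h_i': probability and harm of risk i with the action.
```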

There are many cases of risk-risk tradeoffs involving global catastrophic risks. In addition to nuclear deflection, other examples include nuclear power (which can reduce climate change risk but increase nuclear war risk), advanced AI (which could reduce a variety of risks and create new ones), stratospheric geoengineering (which could reduce climate change risk if it succeeds or increase it if it fails), and global government schemes (which could reduce risks by improving international cooperation but could also increase the risk of repressive totalitarianism). This paper presents a framework that can be used for global catastrophic risk-risk tradeoff analysis and discusses some general analytical challenges.

A central challenge for risk-risk tradeoff analysis of global catastrophic risks is that the risks are difficult to quantify. The case of nuclear deflection is no exception. Asteroid risk is perhaps the best-quantified global catastrophic risk, though significant uncertainties remain, especially regarding the human consequences, as documented in the recent GCRI paper Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold. Violent conflict risk is harder to quantify because it depends on complex social and geopolitical dynamics. This holds in particular for extreme conflict scenarios such as those involving nuclear weapons. The challenge of quantifying nuclear war risk is discussed in the recent GCRI paper Reflections on the risk analysis of nuclear war.

Risk-risk tradeoff analysis additionally requires quantifying the effects that the potential action would have on the risks in question. In the case of nuclear deflection programs, several potential effects merit attention. For asteroid risk, the paper analyzes the effect of nuclear deflection given the availability of other techniques for deflecting asteroids; those other techniques are less controversial and may be the preferable option for many deflection missions. For violent conflict risk, the paper analyzes the potential effects of nuclear deflection programs on nuclear disarmament and on the use of nuclear weapons in violent conflict.
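To illustrate the structure of such a quantification (a minimal sketch with made-up placeholder numbers, not estimates from the paper), one could compare total expected harm with and without a nuclear deflection program:

```python
# Minimal sketch of a risk-risk tradeoff comparison.
# All probabilities and harms are hypothetical placeholders, not values from the paper.

def expected_harm(probability: float, harm: float) -> float:
    """Expected harm of a risk, modeled as probability times severity."""
    return probability * harm

# Annual risks without a nuclear deflection program (placeholder values).
baseline = (
    expected_harm(1e-6, 1e9)    # asteroid collision
    + expected_harm(1e-3, 1e7)  # violent conflict
)

# Annual risks with the program: asteroid risk falls; conflict risk may rise.
with_program = (
    expected_harm(5e-7, 1e9)      # asteroid collision, partially mitigated
    + expected_harm(1.2e-3, 1e7)  # violent conflict, slightly elevated
)

delta = with_program - baseline
print(f"Net change in expected harm: {delta:+.3g}")
print("Net risk decrease" if delta < 0 else "Net risk increase")
```

Under these placeholder inputs the program yields a net risk increase; with different inputs the sign flips, which is exactly why the quantification matters.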

The paper does not reach a definitive conclusion on whether nuclear deflection programs cause a net increase or decrease in risk. The main source of uncertainty is the risk of violent conflict. Indeed, it is not clear whether nuclear weapons increase the risk of violent conflict (by increasing the severity of major wars) or decrease it (by improving the effectiveness of deterrence and thereby reducing the probability of major wars). The paper presents my own judgment, which is that nuclear deflection will tend to cause a net increase in risk. However, as the paper states, I have low confidence in this assessment. The paper is a first attempt at a complex topic, and subsequent analysis may point to different conclusions. (More precisely, it is the first attempt with the detail of a full-length research paper. There have been several shorter studies, including a 2015 article I published in the Bulletin of the Atomic Scientists, Should nuclear devices be used to stop asteroids?)

Academic citation:
Seth D. Baum, 2019. Risk-risk tradeoff analysis of nuclear explosives for asteroid deflection. Risk Analysis, vol. 39, no. 11 (November), pages 2427-2442, DOI 10.1111/risa.13339.


Image credit: NASA/JPL

