Analyzing and Reducing the Risks of Inadvertent Nuclear War Between the United States and Russia

9 January 2013

Download Preprint PDF

Inadvertent nuclear war, as defined in this paper, occurs when one nation mistakenly concludes that it is under attack and launches nuclear weapons in what it believes to be a counterattack. A US-Russia nuclear war would be a major global catastrophe, since these countries still possess thousands of nuclear weapons. Despite the end of the Cold War, the risk remains. This paper develops a detailed mathematical “fault tree” model to analyze the ongoing risk of inadvertent US-Russia nuclear war.

The fault tree model. A fault tree is a scheme for modeling events and conditions that could result in some final event. Here, the final event is inadvertent US-Russia nuclear war. Initial events could include research rockets, as in the 1995 Norwegian rocket incident (involving a Black Brant XII rocket, as pictured above), faulty computer chips, wild animal activity, and nuclear terrorist attacks. The nation that detects the initial event, US or Russia, then goes through decision procedures to evaluate the event and decide whether to launch nuclear weapons in response. The model combines estimates of the probabilities of initial events, and of those events passing through the decision procedures, to calculate the probability of the final event. Different data are used during conditions of low or high US-Russia tensions. During high tensions, weapons launch is more likely because the initial event is more likely to be interpreted as a real attack instead of a false alarm.
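To make the structure concrete, here is a minimal sketch of this kind of fault tree calculation. It is illustrative only: the rates and probabilities are hypothetical placeholders rather than the paper's data, and the multi-step decision procedure is collapsed into a single per-event launch probability.

```python
import math

# Hypothetical annual rates of initiating false-alarm events (rocket
# launches, faulty chips, etc.) under each tension condition.
FALSE_ALARM_RATE = {"low": 4.0, "high": 4.0}  # events per year (placeholder)

# Hypothetical probability that a single initiating event passes all the way
# through the detecting nation's decision procedure and triggers a launch.
P_LAUNCH_GIVEN_EVENT = {"low": 1e-5, "high": 1e-3}  # placeholder

# Hypothetical fraction of the year spent in each tension condition.
TIME_FRACTION = {"low": 0.9, "high": 0.1}

def annual_war_probability():
    """Probability of at least one inadvertent launch in a given year.

    Treats initiating events as a Poisson process; thinning that process by
    the per-event launch probability gives the rate of events that actually
    lead to war, and 1 - exp(-rate) is the chance of at least one such event.
    """
    rate = sum(
        TIME_FRACTION[s] * FALSE_ALARM_RATE[s] * P_LAUNCH_GIVEN_EVENT[s]
        for s in ("low", "high")
    )
    return 1.0 - math.exp(-rate)

print(f"annual probability ≈ {annual_war_probability():.2e}")
```

Note how the high-tension condition dominates the total in this toy setup: even though it covers only a tenth of the year, its per-event launch probability is assumed to be two orders of magnitude higher.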

Annual probability of inadvertent US-Russia nuclear war. The paper calculates the annual probability of inadvertent US-Russia nuclear war, meaning the probability of the war occurring during the course of a year. While much remains uncertain, the paper finds that there may still be significant risk of inadvertent US-Russia nuclear war. This general finding holds even if there are fewer initial false alarm events than there were during the Cold War. The finding also holds even if inadvertent war could only occur during high tensions, although inadvertent war is found to be more likely if it could also occur during low tensions.
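As a rough illustration of what an annual probability implies over longer horizons (the numbers here are hypothetical, not the paper's estimates): if the annual probability is P_annual and years are treated as independent, the probability of the war occurring at least once in N years is

$$ P_{N} = 1 - (1 - P_{\text{annual}})^{N} $$

so a hypothetical annual probability of 0.01 would compound to 1 − 0.99^10 ≈ 0.096, or nearly 10%, over a decade.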

Options for risk reduction. The paper considers two options for reducing the risk of inadvertent US-Russia nuclear war. First, each nation’s nuclear-armed submarines could be moved further from the other’s border, giving the other nation more time to decide whether an initial event is a real submarine-launched attack or a false alarm. Second, the nations could lower their level of alert for part of the time, so that they would be less likely to conclude that an initial event is an attack. Both options were found to reduce the risk of inadvertent nuclear war. However, before recommending these options, further analysis is needed to assess their effects on other risks, including the risk of intentional (non-inadvertent) nuclear war.
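In the sketch above, the second option (partial de-alerting) could be represented very crudely by lowering the per-event launch probability during the de-alerted periods; the effect size below is made up for illustration, not taken from the paper.

```python
# Hypothetical partial de-alerting: assume the lower alert posture makes an
# initiating event ten times less likely to be read as a real attack during
# low-tension periods (illustrative effect size only).
P_LAUNCH_GIVEN_EVENT["low"] *= 0.1
print(f"annual probability with partial de-alerting ≈ {annual_war_probability():.2e}")
```

Because annual_war_probability() reads the shared parameter tables, rerunning it after the change shows how much of the toy model's risk the intervention removes.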

Academic citation:
Anthony M. Barrett, Seth D. Baum, and Kelly R. Hostetler, 2013. Analyzing and reducing the risks of inadvertent nuclear war between the United States and Russia. Science and Global Security, vol. 21, no. 2, pages 106-133, DOI 10.1080/08929882.2013.798984.

Download Preprint PDF | View at Science and Global Security

Image credit: NASA


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the time of the publication of the work referenced here.
