False Alarms, True Dangers? Current and Future Risks of Inadvertent U.S.-Russian Nuclear War

by Anthony Barrett | 1 January 2016

View in RAND

In the post–Cold War era, it is tempting to see the threat of nuclear war between the United States and Russia as remote: Both nations’ nuclear arsenals have shrunk since their Cold War peaks, and neither nation is actively threatening the other with war. A number of analysts, however, warn of the risk of an inadvertent nuclear conflict between the United States and Russia — that is, a conflict that begins when one nation misinterprets an event (such as a training exercise, a weather phenomenon, or a malfunction) as an indicator of a nuclear attack or a provocation.

Understanding how miscalculations and misperceptions can lead to the use of nuclear weapons is an important step toward reducing the probability of an inadvertent nuclear conflict. At present, the United States does not appear to have a consistently used method for assessing the risk of inadvertent nuclear war. To address this gap, this report synthesizes key points from the literature on the pathways by which, and the conditions under which, misinterpretations could lead to a nuclear strike by either U.S. or Russian forces. By shedding light on these risks, this report aims to inform decisionmakers about measures that both nations can take to reduce the probability of an inadvertent nuclear conflict.

Recommendations

  • To help reassure Russian leaders that no U.S. attack is occurring — and thus reduce the probability of Russian nuclear use in an early warning false alarm — the United States should consider steps to compensate for the current limitations in the coverage and reliability of Russian early warning systems, such as the Pechora Radar Station shown above. 
  • The United States should acknowledge and encourage actions by Russia to make its own investments to improve early warning systems (to increase the probability that Russian leaders would be able to tell the difference between an early warning system false alarm and an actual incoming attack) and to improve the survivability of Russian forces and command and control systems (to reduce the perceived threat of a U.S. first strike). 
  • The United States should consider making observable but reasonable adjustments to its own forces to reduce its threat to Russian second-strike capability. 
  • The United States should consider avoiding further development of EMP weapons that could seem aimed at Russian command and control disruption. 
  • The United States should do what it can to reduce the probability that Russia will activate the Dead Hand system. 

Academic citation:
Anthony Barrett, 2016. False alarms, true dangers? Current and future risks of inadvertent U.S.-Russian nuclear war. RAND Corporation, document PE-191-TSF, DOI 10.7249/PE191.

Image credit: Ivan Z


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the time of the publication of the work referenced here.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.