Nuclear War Group Discusses Ongoing Risk Of US-Russia Nuclear War

30 December 2012

On Thursday 29 November, GCRI hosted the fourth of a series of discussions among a group of nuclear war scholars. This discussion focused on the ongoing risk of an all-out nuclear war between the United States and Russia.

Meeting participants included Martin Hellman of Stanford, Benoît Pelopidas of Bristol, James Scouras of the Johns Hopkins University Applied Physics Laboratory, and Tony Barrett, Seth Baum, Jacob Haqq-Misra, and Tim Maher, all of GCRI.

The reason for focusing on US-Russia nuclear war is simple: the US and Russia still hold the overwhelming majority of the world’s nuclear weapons. According to the Federation of American Scientists, the US now has 2,150 operational nuclear weapons and 8,000 total, while Russia has 1,800 operational and 10,000 total. (Operational weapons are those available for more immediate use; the rest are in some form of storage.) While these arsenals are less than half their Cold War peaks [1], they are still large enough to cause grave damage. But with the Cold War over, how likely is such a war?

The key statistic here is the probability of US-Russia nuclear war per unit time. The 1% annual probability previously estimated by Martin Hellman [2] corresponds to approximately 10% per decade, 63% per century, and very close to 100% per millennium, assuming the annual probability stays the same over time [3]. The point is that unless we do something to make a US-Russia nuclear war less likely in any given year, the war is eventually virtually guaranteed to occur. Note that this logic applies not just to US-Russia nuclear war but to other types of events as well, including other possible global catastrophes.
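
For readers who want to verify these figures, here is a minimal Python sketch applying the formula from note [3] to Hellman’s 1% annual estimate [2]. It inherits the same strong constant-probability assumption discussed there.

```python
# Minimal sketch: cumulative probability of at least one nuclear war
# under a constant 1% annual probability (a stationary Bernoulli process).
# The 1% figure is Hellman's estimate [2]; the formula is from note [3].

def cumulative_probability(annual_p: float, years: int) -> float:
    """Probability of at least one event in `years` years: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - annual_p) ** years

for years in (10, 100, 1000):
    print(f"{years:>4} years: {cumulative_probability(0.01, years):.1%}")

# Output:
#   10 years: 9.6%
#  100 years: 63.4%
# 1000 years: 100.0%
```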

With the Cold War over, it may seem to many Americans that the annual probability of US-Russia nuclear war is quite low. But this sentiment is not universally shared. For example, earlier in 2012, then-Presidential candidate Mitt Romney referred to Russia as the US’s “number one geopolitical foe”. Previously, US Senator John McCain wrote on Twitter “Dear Vlad, The #ArabSpring is coming to a neighborhood near you”. These comments suggest ongoing American concerns about Russia.

Meanwhile, Russia’s ongoing concerns about the US may be even greater. Many in Russia retain the Cold War view of the US as seeking global domination. Two big factors are the ongoing expansion of NATO into Eastern Europe and NATO’s involvement in the 2008 South Ossetia (Georgia) war, the latter of which came close to putting US and Russian troops in direct combat. Indeed, Putin thanked Romney for his “number one geopolitical foe” comment, saying that it drew attention to and helped confirm Putin’s concerns about NATO plans for a missile defense shield in Eastern Europe.

An ongoing challenge, then, is understanding the other side’s perspective. At least some Americans may underestimate US-Russia tensions because we fail to see Russia’s perspective. To us, NATO expansion may be about peace or democracy; to Russia, it is about global domination. The intelligence community refers to this failure as “mirror imaging” [4]:

One kind of assumption an analyst should always recognize and question is mirror-imaging: filling gaps in the analyst’s own knowledge by assuming that the other side is likely to act in a certain way because that is how the US would act under similar circumstances. To say, “if I were a Russian intelligence officer …” or “if I were running the Indian Government …” is mirror-imaging. Analysts may have to do that when they do not know how the Russian intelligence officer or the Indian Government is really thinking. But mirror-imaging leads to dangerous assumptions, because people in other cultures do not think the way we do. (cia.gov)

There are other factors in the annual probability of US-Russia nuclear war, such as the possibility of a war catalyzed inadvertently by some type of false alarm, as the GCRI nuclear war group discussed previously. A full analysis of the annual probability would require combining intentional and inadvertent nuclear war; a rough sketch of what such a combination could look like appears below. The full analysis is a project left for future research.
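
As an illustration only, the sketch below assumes the intentional and inadvertent pathways are statistically independent, with purely hypothetical placeholder probabilities. A real analysis would need actual estimates and would likely have to model interactions between the pathways rather than assume independence.

```python
# Illustrative sketch only: combining intentional and inadvertent pathways
# under the (strong) assumption that they are independent. The probabilities
# below are hypothetical placeholders, not estimates from the discussion.

p_intentional = 0.005   # hypothetical annual probability, intentional war
p_inadvertent = 0.005   # hypothetical annual probability, inadvertent war

# At least one pathway leads to war: 1 - P(neither pathway does)
p_total = 1.0 - (1.0 - p_intentional) * (1.0 - p_inadvertent)
print(f"Combined annual probability: {p_total:.4%}")  # ~0.9975%
```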

For previous GCRI nuclear war group discussions, see GCRI Nuclear War Group Discusses Nuclear Winter, Tony Barrett Gives CSIS Practice Talk On Inadvertent Nuclear War To GCRI Nuclear War Group, and GCRI Hosts Discussion Of Nuclear War.

[1] See Robert S. Norris and Hans M. Kristensen, Global nuclear stockpiles, 1945-2006, Bulletin of the Atomic Scientists 62, no. 4 (July/August 2006), 64-66, and Natural Resources Defense Council, Archive of Nuclear Data.

[2] Martin E. Hellman, Risk Analysis of Nuclear Deterrence, The Bent of Tau Beta Pi, Vol. 99, No. 2, pp. 14-22, Spring 2008.

[3] Here’s how these numbers are calculated. First, we assume that the 1% annual deterrence failure probability estimate describes a stationary Bernoulli process. Then for constant annual probability p, the probability of at least one event during n years is 1 − (1 − p)^n. Thus even with 1% per year, some centuries would happen to have no such events, while others would have more than one, were that possible in the case of nuclear war. Please note that these calculations assume the annual probability is constant over time, which is a strong assumption given the tremendous uncertainty about what our world will look like over upcoming decades, centuries, and beyond.
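
To make the point about event-free and multiple-event centuries concrete, here is a small Monte Carlo sketch of such a Bernoulli process. Under it, the number of events per century follows a Binomial(100, 0.01) distribution, so roughly 37% of centuries contain no event and roughly 26% contain more than one.

```python
# Sketch: simulate many centuries of a stationary Bernoulli process with
# p = 0.01 per year, counting how many "events" each century contains.
import random

random.seed(0)
p, years, trials = 0.01, 100, 10_000

counts = [sum(random.random() < p for _ in range(years)) for _ in range(trials)]
zero = sum(c == 0 for c in counts) / trials
multiple = sum(c > 1 for c in counts) / trials
print(f"Centuries with no event:        {zero:.1%}")      # ~36.6%
print(f"Centuries with multiple events: {multiple:.1%}")  # ~26.4%
```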

[4] See also http://en.wikipedia.org/wiki/Cognitive_traps_for_intelligence_analysis
