Nuclear War, the Black Swan We Can Never See

21 November 2014

View in the Bulletin of the Atomic Scientists

This article discusses the looming threat of nuclear war through the symbolism of the black swan: something that seems impossible but can in fact occur.

The article begins as follows:

Several centuries ago in England, the black swan was a popular symbol for the impossible because no such creature had ever been seen. Then came the surprise: Black swans were discovered in Australia. Since then, the bird has symbolized that which seems impossible but can in fact occur. The black swan reminds us that believing something cannot happen is often just a failure of imagination.

Parts of society today hold the same view of nuclear war that society in England did of black swans centuries ago: No nuclear war has ever been observed, so it may seem impossible that one would occur. Though nations possess some 16,000 nuclear warheads, deterrence just seems to work. And so, especially with the Cold War a fading memory, attention has shifted elsewhere. But it is just as much of a mistake to think that nuclear war couldn’t happen now as it was to think that black swans couldn’t exist back then.

It is true that, in any given year, nuclear war is unlikely, but the chance of it happening is not zero. Stanford professor emeritus Martin Hellman has a great way of explaining the risk. He compares it to a coin of unknown bias, flipped once a year for every year since the first Soviet nuclear weapon test in 1949. For 65 years, the coin has always landed on heads. If the coin had always landed flat on heads, we might think the probability of tails was close to zero. But in some years, the coin has teetered on its edge before falling on heads. Given this, should we still think the probability is near zero?
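To make the coin analogy concrete: under Laplace's rule of succession, which assumes a uniform prior over the coin's unknown bias, 65 heads and zero tails give the next flip a tails probability of 1/67, roughly 1.5 percent per year. The Python sketch below is an illustrative back-of-the-envelope calculation, not Hellman's actual analysis:

```python
# Illustrative sketch only (not Hellman's actual method): Laplace's rule of
# succession for a coin of unknown bias observed to land heads n times in a row.
from fractions import Fraction

def rule_of_succession(heads: int, tails: int) -> Fraction:
    """Posterior predictive probability of tails on the next flip,
    given a uniform prior over the coin's bias."""
    return Fraction(tails + 1, heads + tails + 2)

# 65 "flips" since the first Soviet nuclear test in 1949, all heads so far.
p_tails = rule_of_succession(heads=65, tails=0)
print(p_tails, float(p_tails))  # 1/67, about 0.015 -- small, but not zero
```

This naive estimate also ignores the years in which the coin teetered on its edge; accounting for such near misses would push the estimated probability higher, which is the excerpt's point.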

The remainder of the article is available in the Bulletin of the Atomic Scientists.

Image credit: fir0002


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to match the publication date of the work referenced here.


Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always centered on hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
