Deterrence, Without Nuclear Winter

9 March 2015

View in the Bulletin of the Atomic Scientists

This article discusses how the world’s major nuclear powers could likely meet their deterrence needs without posing a risk of nuclear winter.

The article begins as follows:

The biggest danger posed by today’s large nuclear arsenals is nuclear winter. One or two nuclear strikes could wreak devastation on a few regions, but would not destroy human civilization as a whole. The roughly 16,300 nuclear weapons that currently exist, though, are more than enough to cause nuclear winter, which, through extreme cold, ultraviolet radiation, and crop failures, could threaten the whole of humanity. If we fail to avoid nuclear winter, we could all die, or we could see civilization collapse, never to return.

That makes avoiding nuclear winter paramount. But the world’s major powers, in particular the United States and Russia, have long argued that their large nuclear arsenals are required for deterrence. Deterrence means threatening another party with some sort of harm in order to persuade it not to do something. In this case, it means threatening massive nuclear retaliation to dissuade another country from launching an attack itself. If two countries were to follow through on their threats of nuclear retaliation, mutual destruction would be assured. That deters both sides from starting a war. But nuclear deterrence can fail, as demonstrated by events like the Cuban missile crisis, in which the world escalated toward nuclear war. (Martin Hellman, Ward Wilson, and others have documented such events.)

As things stand now, a failure of deterrence could result in nuclear winter. It may be possible, though, for the world’s biggest nuclear powers to meet their deterrence needs without keeping the large nuclear arsenals they maintain today. They could practice winter-safe deterrence, which would rely on weapons that pose no significant risk of nuclear winter.

The remainder of the article is available in the Bulletin of the Atomic Scientists.

Image credit: United States Office of War Information


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the time of the publication of the work referenced here.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
