Tony Barrett Gives CSIS Practice Talk on Inadvertent Nuclear War to GCRI Nuclear War Group

28 October 2012

On Thursday 11 October 2012,* GCRI hosted the second in a series of discussions among a group of nuclear war scholars. The discussion centered on a practice talk that Tony Barrett gave for an upcoming conference at the Center for Strategic and International Studies (CSIS).
* We apologize for the delays in getting this post online.

Meeting participants included Martin Hellman of Stanford and Seth Baum, Tony Barrett, and Jacob Haqq-Misra, all of GCRI.

Barrett’s talk, “Analyzing and Reducing the Risks of Inadvertent Nuclear War between the United States and Russia”, was prepared for the fall 2012 conference of the CSIS Project on Nuclear Issues. The talk was based on research Barrett led for GCRI with Seth Baum and Kelly Hostetler.

Inadvertent nuclear war occurs when one nation incorrectly believes it is under attack and then uses nuclear weapons in what it believes is a counterattack. The result is a nuclear war that happens “by accident”. While inadvertent nuclear war has never occurred, there have been several close calls, including the 1983 Able Archer incident and the 1995 Norwegian rocket incident.

For the previous GCRI nuclear war group discussion, see GCRI Hosts Discussion Of Nuclear War.
