June Newsletter: The Winter-Safe Deterrence Controversy

by Seth Baum | 22 June 2015

Dear friends,

The last few months have gone well for GCRI. We have several new papers out, two new student affiliates, and some projects in the works that I hope to announce in an upcoming newsletter. Meanwhile, I’d like to tell you about a little controversy we recently found ourselves in.

The controversy surrounds a new research paper of mine titled Winter-safe deterrence: The risk of nuclear winter and its challenge to deterrence. The essence of winter-safe deterrence is to seek options for deterrence that would not risk nuclear winter or any other global catastrophe. Winter-safe deterrence is an effort to craft practical solutions for reducing global catastrophic risk, solutions that are feasible for the people who need to implement them. The nuclear-armed countries want to keep their nuclear weapons for deterrence. However, they may be willing to switch to deterrence with other weapons. This would make the world safe from nuclear winter without forcing countries to overhaul their security policies.

This sort of practical solution is at the heart of GCRI’s approach. It’s why our flagship integrated assessment project emphasizes stakeholder engagement: we strive to understand the risks and the solutions from stakeholders’ perspectives. This is vital for translating ideas into action.

The controversy arose because the paper suggested other types of weapons that could be used for deterrence. This is an inherently controversial exercise: suggesting which weapons might be good for threatening massive harm. What proved especially controversial was the paper’s suggestion of a possible role for non-contagious biological weapons. These weapons are banned by treaty and widely deplored, so it is no surprise that there would be controversy here. The biological weapons community responded vigorously. One thing we have learned from this is that non-contagious biological weapons probably do not actually make for effective deterrents, due to some technical subtleties. Non-contagious biological weapons probably have no role in winter-safe deterrence.

The winter-safe deterrence controversy demonstrates the challenge and the importance of GCRI’s research agenda. Because GCRI works across the full range of global catastrophic risks, we will inevitably find solutions that help in some areas and hurt in others. We will continue to seek solutions that satisfy everyone, but this will not always be possible. Meanwhile, we will continue to promote open discussion in order to vet our ideas and make sure stakeholders’ voices are heard. Controversy cannot always be avoided, but it can be used to make progress on the underlying issues.

As always, thank you for your interest in our work. We welcome any comments, questions, and criticisms you may have.

Sincerely,
Seth Baum, Executive Director

GCR News Summaries

Here are Robert de Neufville’s monthly news summaries for February, March, April, and May. As always, these summarize recent events across the breadth of GCR topics.

New Junior Associates: Jessica Cianci and Trevor White

GCRI welcomes two new Junior Associates. Both are students working with us on exciting research projects.

Jessica Cianci is a B.A. student in anthropology and psychology at American University. Her project studies what motivates people to work on global catastrophic risk and other major global issues.

Trevor White is a J.D. student at Cornell Law School. His project studies legal liability for risks posed by autonomous machines, from self-driving cars to superintelligence.

Denkenberger Takes Faculty Position

Congratulations to GCRI Associate David Denkenberger, who has taken a faculty position as Assistant Professor of Architectural Engineering at Tennessee State University. Denkenberger will continue to do research on global catastrophic risk as part of this position. Denkenberger’s GCR research has focused on innovative risk-reducing technologies, including refuges and alternative foods.

Media Coverage

GCRI was featured in a recent article in Quartz, together with our colleagues at the Oxford University Future of Humanity Institute and the Cambridge University Centre for the Study of Existential Risk. The article is Meet the people out to stop humanity from destroying itself by Kabir Chibber.

GCRI Events

Jacob Haqq-Misra hosted a poster session Life in the Anthropocene: The Future of Earth’s Biosphere at AbSciCon, the Astrobiology Science Conference, 15-19 June in Chicago.

Seth Baum will participate in a panel discussion Understanding the Shape of Things to Come with Anthony Janetos of the Boston University Pardee Center for the Study of the Longer-Range Future, Nicolas Miailhe of the Harvard Kennedy School Future Society, and Richard Mallah of the Future of Life Institute. The panel is part of the event Poles Apart, Melting Together: Science & the Humanities Confront the Anthropocene, hosted by the Boston University Center for Interdisciplinary Teaching & Learning, 27 June in Boston.

New Book: Artificial Superintelligence

New GCRI Associate Roman Yampolskiy has published a new book Artificial Superintelligence: A Futuristic Approach. The book is designed to be a foundational text for the new science of AI safety engineering, concerned with mitigating global catastrophic risks from above human-level AI. It is written for AI researchers and students, computer security researchers, futurists, and philosophers.

New Futures Special Issue Papers

Several new papers to be published in the Futures special issue “Confronting future catastrophic threats to humanity”, co-edited by Seth Baum of GCRI and Bruce Tonn of the University of Tennessee, are now online:

Global catastrophic risk and security implications of quantum computers by Andy Majot and Roman Yampolskiy. This paper examines the potential for quantum computers to thwart cryptography, destabilizing economic and political systems.

The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives by Seth Baum. This paper develops the concept of practical solutions for reducing global catastrophic risk and surveys the options for each major type of global catastrophic risk. These ideas are central to GCRI’s integrated assessment project.

Confronting the threat of nuclear winter by Seth Baum. This paper surveys the range of options for reducing nuclear winter risk: reducing the probability of nuclear war, reducing the severity of nuclear winter if nuclear war occurs, and helping people survive nuclear winter.

Isolated refuges for surviving global catastrophes by Seth Baum, David Denkenberger, and Jacob Haqq-Misra. This paper analyzes design challenges posed by refuges that could help keep people alive through a wide range of global catastrophes.

Linking simulation argument to the AI risk by Milan Ćirković. This paper examines the idea that our world may be a computer simulation. Building the AI needed to run such simulations could require global coordination, but the same global coordination would also make it easier to prohibit the simulations. Thus, it is less likely that our world is a computer simulation, and also less likely that humanity will be destroyed by a computer simulation shutdown.

New Research Papers

GCRI has three other new research papers out:

Resilience to global food supply catastrophes by Seth Baum, David Denkenberger, Joshua Pearce, Alan Robock, and Richelle Winkler, forthcoming in Environment, Systems, and Decisions. This paper surveys options for keeping humanity alive during global catastrophes that reduce the food supply. The options include traditional agriculture, food stockpiles, and “alternative” foods powered by fossil fuels or stored biomass.

Risk and resilience for unknown, unquantifiable, systemic, and unlikely/catastrophic threats by Seth Baum, forthcoming in Environment, Systems, and Decisions. This paper compares the risk and resilience paradigms for four types of threats.

Winter-safe deterrence: The risk of nuclear winter and its challenge to deterrence by Seth Baum, published in Contemporary Security Policy. This paper is described above.

New Popular Articles

Steven Umbrello has a new article The Atlas burden: The cost of America’s nuclear arsenal published by the Institute for Ethics and Emerging Technologies. The article discusses the costs and risks of nuclear arsenals and alternatives for deterrence.

Seth Baum has several new articles in the Bulletin of the Atomic Scientists:

Should nuclear devices be used to stop asteroids?, on the dilemma of nuclear disarmament vs. using nuclear weapons for protecting Earth from asteroids and comets.

Is stratospheric geoengineering worth the risk?, on risks associated with the abrupt halt of stratospheric geoengineering vs. the risks of regular global warming.

Deterrence, without nuclear winter, on winter-safe deterrence. The Bulletin also hosted a roundtable discussion The winter-safe deterrence debate.

Stopping killer robots and other future threats, on the campaign to ban fully autonomous weapons and the general importance of regulating emerging technologies before they’re built.

Baum also has several other new popular articles:

The risk of nuclear winter, published in Public Interest Reports, a publication of the Federation of American Scientists. This article summarizes current research on nuclear winter risk and discusses policy implications.

Getting smart about global catastrophes, published in Medium’s special section 7 Days of Genius, from the event at which Baum spoke with Jeff Sachs and Max Tegmark. This article discusses the importance of solutions that bring large global catastrophic risk reductions and make sense from stakeholder perspectives, as in GCRI’s integrated assessment project.

What are the best ways to prevent global catastrophe?, published by the Institute for Ethics and Emerging Technologies. This article summarizes GCRI’s integrated assessment project.
