February Newsletter: Nuclear War Risk Analysis

28 February 2019

Dear friends,

To reduce the risk of global catastrophe most effectively, it is often essential to have a quantitative understanding of the risk. Quantification is especially important when we face decisions that involve tradeoffs between different risks, or that require prioritizing among multiple risks. For this reason, GCRI has long been at the forefront of the risk and decision analysis of global catastrophic risk.
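
As a toy sketch of what such a quantitative comparison looks like (all numbers here are invented for illustration and are not GCRI estimates), one can rank risks by expected annual harm:

```python
# Toy sketch of risk-based prioritization. All numbers are invented
# for illustration; they are not GCRI estimates.

# Annual probability and harm (in arbitrary units) for two hypothetical risks.
risks = {
    "risk_a": {"annual_probability": 0.01, "harm": 1_000},
    "risk_b": {"annual_probability": 0.001, "harm": 50_000},
}

# Expected annual harm = probability x harm.
for name, r in risks.items():
    expected = r["annual_probability"] * r["harm"]
    print(f"{name}: expected annual harm = {expected:.1f}")

# risk_a: expected annual harm = 10.0
# risk_b: expected annual harm = 50.0
# On this crude metric, the rarer risk_b dominates and would get priority.
```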

This month, we announce a new paper, “Reflections on the Risk Analysis of Nuclear War”. The paper summarizes the state of nuclear war risk analysis, including work by GCRI, and reflects on its implications for research and policy. On one hand, quantitative risk analysis is essential for guiding major policy questions, such as whether states should pursue nuclear disarmament. On the other hand, risk analysis can struggle to yield clear answers to these questions, and policymakers are not necessarily seeking input from risk analysis in their decisions. The paper concludes that more effort should go into nuclear war risk analysis, and that there is an especially strong need for engagement with policymakers on nuclear war risk.

While the paper is focused on nuclear war, the issues are more general. Indeed, the difficulty of risk quantification is a characteristic of all of the global catastrophic risks. We need to get better at this, or else we may essentially be flying blind on many important decisions. Improving the state of risk analysis for the global catastrophic risks will thus remain an important priority for GCRI.
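
To see why quantification is hard in practice, here is a toy sketch (with invented order-of-magnitude bounds, not estimates from the nuclear war literature) of how wide uncertainty can leave even a simple two-risk comparison unresolved:

```python
# Toy sketch of how wide uncertainty muddies risk comparisons.
# All bounds are invented for illustration; they are not GCRI estimates.
import math
import random
import statistics

random.seed(0)
N = 100_000

def sample_expected_harm(p_low, p_high, harm):
    # Sample the annual probability log-uniformly between its bounds,
    # reflecting order-of-magnitude uncertainty, and return expected harm.
    log_p = random.uniform(math.log10(p_low), math.log10(p_high))
    return (10 ** log_p) * harm

# Two risks whose probability estimates each span two orders of magnitude.
a = [sample_expected_harm(1e-4, 1e-2, 1_000) for _ in range(N)]
b = [sample_expected_harm(1e-5, 1e-3, 20_000) for _ in range(N)]

share = sum(x > y for x, y in zip(a, b)) / N
print(f"median expected harm: A = {statistics.median(a):.2f}, "
      f"B = {statistics.median(b):.2f}")
print(f"share of samples where A exceeds B: {share:.2f}")
# With bounds this wide, neither risk clearly dominates: a decision made
# on these numbers alone would be made largely blind.
```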

Sincerely,
Seth Baum, Executive Director


Nuclear War


GCRI Executive Director Seth Baum has published a new paper titled “Reflections on the Risk Analysis of Nuclear War” in the Proceedings of the One Day Workshop on Quantifying Global Catastrophic Risks, which was hosted last year by the UCLA Garrick Institute for the Risk Sciences.

GCRI Director of Communications Robert de Neufville participated in a panel discussion on “Digital Disinformation, Cyber Meddling, and Mean Tweets: A Look Back at the Hawaii Alert—What If?” with Andrew Futter, Nicole Grove, Katie Joseff, and Jaclyn Kerr at the This Is Not a Drill Journalism Workshop hosted by the Stanley Foundation and Atomic Reporters on January 10 in Honolulu, HI.

Artificial Intelligence

Baum participated in a panel discussion on “Governance of Artificial General Intelligence Emergence and Early Use” with Miles Brundage, Peter Eckersley, and Helen Toner, moderated by Anthony Aguirre, at the Beneficial AGI 2019 conference hosted by the Future of Life Institute on January 6 in Rio Grande, Puerto Rico.

Baum also gave a talk titled “The Role of Environmental Expertise in Understanding and Addressing AI Issues” at a workshop on Human-Machine-Ecology hosted by the Princeton University Global Systemic Risk group and the Stockholm Resilience Centre on January 11 in Princeton, NJ.

Long-Term Future

The BBC published a detailed article by Richard Fisher, “The perils of short-termism: Civilisation’s greatest threat”, on contemporary society’s tendency to focus on short-term issues. The article discusses “Long-Term Trajectories of Human Civilization”, a paper led by Baum.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
