July Newsletter: Asteroid-Nuclear Risk Analysis

16 July 2019

Dear friends,

One reason it is important to analyze global catastrophic risks quantitatively is that some decisions involve tradeoffs between them. An action may reduce one risk while increasing another. It is important to know whether the decrease in one risk is large enough to offset the increase in the other.

This month, we announce a new paper that presents a detailed analysis of one such decision: the use of nuclear explosives to deflect Earth-bound asteroids. Nuclear deflection is an option under active consideration by the asteroid risk community, but it may increase the risk of nuclear war or of other violent conflict. The paper does not reach a clear conclusion on whether nuclear deflection would bring a net risk reduction, due mainly to uncertainty in the risk of violent conflict. Instead, the paper’s value comes from laying out the risk-risk tradeoff and making progress on the analysis. The paper also presents a model of global catastrophic risk-risk tradeoff analysis that can be adapted for other tradeoffs between global catastrophic risks.
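To illustrate the kind of comparison involved, below is a minimal sketch of a risk-risk tradeoff calculation in Python, weighing the expected fatalities avoided by asteroid deflection against the expected fatalities added by an elevated chance of violent conflict. All numbers are hypothetical placeholders chosen for illustration; they are not values from the paper.

```python
# Minimal illustrative sketch of a risk-risk tradeoff calculation.
# All numbers are hypothetical placeholders, not values from the paper.

# Baseline annual probabilities and severities (expected fatalities) of each risk.
p_asteroid = 1e-6          # hypothetical annual probability of a catastrophic impact
severity_asteroid = 1e9    # hypothetical fatalities if the impact occurs

severity_war = 1e8         # hypothetical fatalities if nuclear war occurs

# Hypothetical effects of maintaining a nuclear deflection capability.
asteroid_risk_reduction = 0.5    # fraction of asteroid risk eliminated by deflection
war_probability_increase = 1e-5  # added annual probability of war from the capability

# Expected fatalities avoided (asteroid) vs. added (war), per year.
benefit = asteroid_risk_reduction * p_asteroid * severity_asteroid
cost = war_probability_increase * severity_war

net_change = cost - benefit
print(f"Expected fatalities avoided per year: {benefit:.1f}")
print(f"Expected fatalities added per year:   {cost:.1f}")
print("Net risk", "increases" if net_change > 0 else "decreases",
      f"by {abs(net_change):.1f} expected fatalities per year")
```

With these placeholder numbers the capability adds more expected harm than it removes, but the conclusion flips under other plausible inputs, which is why the uncertainty in the conflict risk dominates the analysis.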

The paper is “Risk-Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection”. It is published in Risk Analysis, the flagship journal of the Society for Risk Analysis.

Sincerely,
Seth Baum
Executive Director

Artificial Intelligence

GCRI Executive Director Seth Baum gave a talk titled “Intermediate-Term Artificial Intelligence & Society” at the Center for Human-Compatible Artificial Intelligence (CHAI) in Berkeley, CA, on June 26.

Emerging Technology Risk

GCRI Executive Director Seth Baum participated via remote connection in a panel discussion with Christopher Nathan on “Cross-cutting Lessons About Risk in Emerging Technology”, hosted by the University of Warwick Integrative Synthetic Biology Centre, on June 6.

Grants and Funding

GCRI has received an $85,000 grant from the Berkeley Existential Risk Initiative (BERI). We are grateful for this support.

Open Call for Advisees and Collaborators

GCRI is still accepting applications for its new advising and collaboration program. The response to our call for advisees and collaborators has been strong. Applicants should email GCRI Director of Communications Robert de Neufville (robert@gcrinstitute.org) a short description of their background and interests, what they hope to get out of their interaction with GCRI, the city they are based in, and either a resume/CV or a link to a professional website.

Planetary Defense

GCRI Executive Director Seth Baum has a paper forthcoming in Risk Analysis titled “Risk-Risk Tradeoff Analysis of Nuclear Explosives for Asteroid Deflection”. The paper uses risk-risk tradeoff analysis to assess whether the nuclear deflection of asteroids would result in a net increase or net decrease in risk.

Society for Risk Analysis Meeting

GCRI is hosting a session on Global Catastrophic Risks at the Society for Risk Analysis (SRA) 2019 annual meeting, December 8-12 in Arlington, VA. SRA is the leading professional society for the analysis of all types of risk. GCRI Director of Research Tony Barrett is chairing the session. GCRI Executive Director Seth Baum will contribute a talk titled “Global Catastrophic Risk Analysis”, and GCRI Special Advisor for Government Affairs Jared Brown will contribute a talk titled “US Policy for Reducing Global Catastrophic Risk”.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.