Uncertain Human Consequences in Asteroid Risk Analysis and the Global Catastrophe Threshold

by Seth Baum | 17 August 2018


Asteroid collision is arguably the best-understood global catastrophic risk. This paper shows that it is not so well understood after all, due to uncertainty in the human consequences of a collision. This finding matters both for asteroid risk and for the wider study of global catastrophic risk: if even asteroid risk is not well understood, then neither are other risks such as nuclear war and pandemics.

In addition to our understanding of the risks, two other important issues are at stake. One is policy for asteroids (and likewise for other risks). The paper argues that greater uncertainty about the human consequences demands more aggressive asteroid risk reduction, in order to err on the safe side of avoiding catastrophe. Also at stake is the question of which risks to prioritize, including extinction risks vs. catastrophes that leave survivors. Uncertainty about the fate of survivors can be a reason to prioritize sub-extinction risks similarly to extinction risks, as the simple expected-value sketch below illustrates. This issue is also discussed in the recent paper Long-Term Trajectories of Human Civilization.
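As a rough illustration of that prioritization logic (the notation here is ours, for illustration, not the paper's): let q be the probability that the survivors of a sub-extinction catastrophe never recover civilization. In expected long-term value terms,

E[long-term loss | sub-extinction catastrophe] = q × E[long-term loss | extinction], with 0 ≤ q ≤ 1.

If q could plausibly be, say, 0.3 rather than near zero, then reducing the sub-extinction risk is worth roughly 30% as much as reducing an equal-probability extinction risk. This is why uncertainty about survivor outcomes can push sub-extinction risks toward extinction-level priority.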

The paper also discusses the prospect of nuclear war triggered by asteroid collisions. Asteroid collisions produce explosions that could be mistaken for nuclear attacks. Many similar nuclear war false alarms have occurred, as documented in the GCRI paper A Model for the Probability of Nuclear War. The prospect of an asteroid collision false alarm was recently in the news due to an asteroid explosion near the US Air Force's Thule Air Base in Greenland. This issue has important policy implications for both the asteroid and military communities, as discussed in the paper.

Academic citation:
Baum, Seth D., 2018. Uncertain human consequences in asteroid risk analysis and the global catastrophe threshold. Natural Hazards, DOI 10.1007/s11069-018-3419-4.

Download Preprint PDF | View in Natural Hazards | View in ReadCube

Image credit: NASA/JPL

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
