Climate Change, Uncertainty, and Global Catastrophic Risk

by Seth Baum | 30 July 2024

Download PDF Preprint

Is climate change a global catastrophic risk? Warming temperatures are already causing a variety of harms around the world, some quite severe, and these harms are projected to worsen as temperatures increase. However, despite the massive body of research on climate change, the potential for extreme global harms remains highly uncertain and controversial. This paper addresses the question by examining the theoretical definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that, given current knowledge, yes, climate change should be classified as a global catastrophic risk, and it may be a significant global catastrophic risk.

Whether something qualifies as a global catastrophic risk depends on how global catastrophic risk is defined. Many definitions of global catastrophic risk have been proposed, but all of them share one essential feature: each sets some lower threshold for the severity of global catastrophe. If an event would cause harm that exceeds this threshold, then it is a global catastrophe. The threshold has been set at, for example, one billion deaths, the death of 10% or 25% of the global human population, or, in prior GCRI research, a large and damaging change to the state of the global human system. Some definitions set the threshold at a level from which recovery from the catastrophe could not occur, but as the paper explains, this is inappropriate because it excludes risks whose outcomes are uncertain.

The paper uses the graphic shown above to illustrate why uncertainty about outcomes matters for whether a risk counts as a global catastrophic risk. Graphs (a) and (b) are wider, and therefore more uncertain, than (c) and (d), but they have the same average severity. Both (a) and (b) have a significant probability of exceeding the global catastrophe threshold, shown in red and orange, whereas (c) does not. Graph (a) shows that even if a risk will probably not end in global catastrophe, it may still be a global catastrophic risk if there is a lot of uncertainty.
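The intuition behind the graphic can be sketched numerically. Assuming, purely for illustration, that severity follows a normal distribution (the paper's graphs need not be normal) and using made-up numbers for the mean, spread, and catastrophe threshold, two risks with the same expected severity but different uncertainty have very different probabilities of crossing the threshold:

```python
import math

def tail_probability(mean, std, threshold):
    """P(severity > threshold) for a normally distributed severity."""
    z = (threshold - mean) / (std * math.sqrt(2))
    return 0.5 * math.erfc(z)

# Hypothetical values: both risks have expected severity 5 on an arbitrary
# scale; the global catastrophe threshold is at 10.
narrow = tail_probability(mean=5, std=1, threshold=10)  # like graph (c)
wide = tail_probability(mean=5, std=4, threshold=10)    # like graph (a)

print(f"narrow distribution: {narrow:.2e}")  # ~3e-07: effectively never
print(f"wide distribution:   {wide:.3f}")    # ~0.106: a real possibility
```

With the same mean, the narrow distribution essentially never reaches the threshold, while the wide one crosses it roughly one time in ten. This is the sense in which greater uncertainty, holding average severity fixed, makes something more of a global catastrophic risk.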

Climate change is more like (a) and (b). It’s clear that climate change is causing a lot of harm, but it’s very uncertain how severe the harms will end up being over upcoming decades and centuries. It’s uncertain how much the climate will end up changing and how successfully humanity will adapt to the climatic changes.

Typical climate change scenarios involve temperature increases of a few degrees, but some global catastrophic risk research has considered warming of 20ºC. In such scenarios, much of the planet would be uninhabitable: it would be too warm for human bodies to survive outdoors for at least part of the year. Even then, it may be possible for some humans to survive in a few coastal or high-altitude regions, or perhaps in giant air-conditioned structures.

Conversely, more probable scenarios of a few degrees temperature increase could still result in global catastrophe. Moderate climate change could cause catastrophic global harm via impacts on food security, violent conflict, infectious diseases, geoengineering, and/or other factors. In these scenarios, climate change may be only one part of a broader system of events that result in global catastrophe, but it may nonetheless be the case that without climate change, global catastrophe would not have occurred.

Nuclear winter is similar. The most extreme nuclear winter scenarios involve massive nuclear wars and unusually severe disruptions, but even then, it may be possible for some humans to survive. Meanwhile, more moderate scenarios could cause complex effects that may result in global catastrophe. Therefore, a similar case can be made for climate change and nuclear winter as global catastrophic risks. (Note: nuclear winter involves temperature declines and is therefore a type of climate change; the paper follows the standard usage of “climate change” meaning anthropogenic global warming.)

To an extent, the status of climate change as a global catastrophic risk doesn’t really matter. It’s still the case that climate change poses significant risks and likewise that it’s worth substantial effort to address. It’s good to reduce greenhouse gas emissions regardless of the implications for global catastrophic risk. However, climate change being a global catastrophic risk strengthens the case for action. Furthermore, it justifies efforts from the field of global catastrophic risk. For starters, the field can contribute to the study of extreme climate change scenarios, which remain understudied.

The paper extends GCRI’s research on risk and decision analysis. It builds on several prior GCRI publications. Double catastrophe: Intermittent stratospheric geoengineering induced by societal collapse, The great downside dilemma for risky emerging technologies, and Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing discuss tradeoffs between climate change and risky emerging technologies; the status of climate change as a global catastrophic risk affects how this tradeoff is made. Integrating the planetary boundaries and global catastrophic risk paradigms, Long-term trajectories of human civilization, and Quantifying the probability of existential catastrophe: A reply to Beard et al. discuss the definition of global catastrophic risk and related theoretical matters. Resilience to global food supply catastrophes discusses humanity’s ability to adapt to and survive catastrophes like nuclear winter. See also GCRI’s review of the book The Precipice, which critiques the book’s more dismissive position on climate change.

Academic citation:
Baum, Seth D., 2024. Climate change, uncertainty, and global catastrophic risk. Futures, vol. 162 (September), article 103432, DOI 10.1016/j.futures.2024.103432.


Image credit: Seth Baum


Recent Publications

Assessing the Risk of Takeover Catastrophe from Large Language Models

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.

Manipulating Aggregate Societal Values to Bias AI Social Choice Ethics

Manipulating Aggregate Societal Values to Bias AI Social Choice Ethics

AI ethics concepts like value alignment propose something similar to democracy, aggregating individual values into a social choice. This paper, published in the journal AI and Ethics, explores the potential for AI systems to be manipulated in ways analogous to sham elections in authoritarian regimes.
