The Role of Risk and Decision Analysis in Global Catastrophic Risk Reduction

26 November 2018

This post provides some general discussion on how we at GCRI currently view risk and decision analysis. It has long been a major theme of our work, though even among ourselves its value remains a matter of debate. This post shares our current thinking in order to prompt wider discussion of these important but challenging research tools.

GCRI is a leader in the use of risk and decision analysis to understand global catastrophic risk. Risk analysis is the process of characterizing and perhaps quantifying the probabilities and severities of potential bad events, while decision analysis does the same for the effects of decision options on risks and other factors. Risk and decision analysis have played important roles in reducing many important risks.

My colleagues and I at GCRI have been thinking about the value of applying risk and decision analysis to global catastrophic risk since our inception. GCRI was born in the world of risk analysis. Our co-founders, Tony Barrett and myself, met at the annual conference of the Society for Risk Analysis, the leading academic and professional society for most aspects of risk (the financial risk community has separate societies). Risk and decision analysis are central to our research agenda.

As a matter of principle, I believe that global catastrophic risk can and should be analyzed, and that risk and decision analysis should play a major role in guiding risk-reduction decisions. In practice, however, global catastrophic risks and the actions that could reduce them are difficult to analyze, especially with a high degree of rigor. It’s often not clear how global catastrophic risks should be analyzed or whether it’s worth the effort to do the analysis. Furthermore, decision makers don’t always want or need risk or decision analysis to make risk-reduction decisions.

The case for risk and decision analysis

Risk and decision analysis aim to provide a better understanding of things that can go wrong and what could be done to make them go better. Risk and decision analyses are often, but not always, quantitative. In some cases, simply documenting the ways things can go wrong is enough to figure out what to do. But in other cases quantitative analysis is invaluable.

Take the case of nuclear power. Many factors go into decisions of when and how to build nuclear power plants, including the need for more electricity, the cost of a plant, the other options available, local health and environmental risks, and public preferences. But if we set all those aside and just focus on nuclear power’s impact on global catastrophic risk, nuclear power has two main effects: it reduces greenhouse gas emissions and thus decreases climate change risk and it increases the chance of nuclear weapons proliferation and thus increases nuclear war risk. (There are some who argue that nuclear proliferation decreases risk by expanding access to nuclear deterrence, but to keep things simple, we’ll set that view aside here.)

Nuclear power is a case of what’s called a “risk-risk tradeoff”, when some action reduces one risk while increasing another. All else being equal, it would be better to do whatever results in the least total risk. Risk and decision analysis allow us to figure out whether nuclear power decreases climate change risk more than it increases nuclear war risk.

Of course, other alternatives may allow us to avoid this tradeoff. In this case, energy conservation and renewable energy can both reduce climate change risk without increasing nuclear war risk. If these alternatives are clearly preferable, no further risk or decision analysis is needed. Simply characterizing the nature of the risk is enough—there’s no need for quantitative analysis.

However, if nuclear power may be the better option, we need quantitative analysis to evaluate the risk-risk tradeoff. Specifically, we need to quantify the climate change and nuclear war risks, as well as the size of the effect of nuclear power on both risks. Only then can we determine whether a nuclear power plant would be worth building.
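To illustrate the kind of quantification involved, here is a minimal sketch. All of the numbers are hypothetical placeholders for illustration only, not estimates of the actual climate change or nuclear war risks:

```python
# Hypothetical, illustrative numbers only -- not real risk estimates.
# Assumed marginal effect of one additional nuclear power plant on the
# annual probability of each type of catastrophe:
climate_risk_reduction = 0.0001   # plant displaces fossil fuel emissions
nuclear_risk_increase = 0.00005   # marginal proliferation risk

def net_risk_change(dp_climate_down, dp_nuclear_up):
    """Net change in total catastrophe probability (negative = safer)."""
    return dp_nuclear_up - dp_climate_down

delta = net_risk_change(climate_risk_reduction, nuclear_risk_increase)
print(f"Net change in total risk: {delta:+.5f}")
# A negative value favors the plant on this (deliberately simplistic) metric.
```

A real analysis would also have to weight the two risks by their severities, since a climate catastrophe and a nuclear war need not be equally bad; the sketch compares probabilities alone.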

Many other decisions benefit from risk characterization and quantification. For example, should nuclear weapons be used to deflect asteroids away from Earth, even if this could increase the risk of nuclear war? Should we place dust particles in the stratosphere to block sunlight, decreasing climate change risk as long as the particles are maintained, but increasing the risk if the effort is interrupted? Should we develop advanced artificial intelligence or nanotechnology that could help us reduce a range of risks but could also create new risks?

These examples all involve risk-risk tradeoffs. Another important type of tradeoff occurs when we allocate scarce resources such as time, attention, and money. Should a philanthropist donate to an organization working on nuclear or biological weapons? Which risks should policymakers put on their agenda? Sometimes, these questions can be answered with a qualitative characterization of the risks, but often we need numbers.

But good risk and decision analysis is difficult

Ideally, we would have high-quality analyses of all global catastrophic risks and risk-reducing decision options. Unfortunately, good risk analysis is not easy.

Good risk analysis accurately describes the key details of and uncertainties about the risk. Quantitative analysis should be reliable, so that if we say, for example, that X has a 70% probability, then X should tend to occur seven out of ten times. It should be done in a way that everyone with access to the same information would agree with, and it should be communicated clearly and effectively.
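The reliability criterion can be checked empirically against a track record of past forecasts. A minimal sketch of such a calibration check, with forecast/outcome pairs invented for illustration:

```python
# Calibration check: events assigned probability p should occur roughly
# a fraction p of the time. The pairs below are invented for illustration.
forecasts = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, True),
    (0.7, False), (0.7, True), (0.7, True), (0.7, False), (0.7, True),
]

def observed_frequency(pairs, p):
    """Fraction of events that occurred among those forecast at probability p."""
    relevant = [occurred for prob, occurred in pairs if prob == p]
    return sum(relevant) / len(relevant)

freq = observed_frequency(forecasts, 0.7)
print(f"Forecast 0.7, observed frequency {freq:.1f}")  # 7 of 10 occurred
```

For global catastrophes the check is of limited use, since the events of interest have never occurred; at best one can calibrate an analyst's forecasts on related, more frequent events.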

Good risk analysis of global catastrophic risks is particularly challenging. If you want to know the risk of dying in a car crash, there’s a lot of historical data on car crashes. However, a global catastrophe severe enough to threaten global civilization, the type of catastrophe GCRI focuses on, would be unprecedented. The empirical data that’s available for car crashes simply doesn’t exist for global catastrophes.

We can analyze global catastrophic risks using other types of information. These include underlying processes, such as the physics of asteroid collisions, and historical incidents in which global catastrophe may have nearly occurred, such as the Cuban missile crisis. One difficulty is that, for global catastrophic risks, the amount of relevant information can be extremely large. For example, our analysis of the probability of nuclear war includes 60 complex historical incidents and uses a range of international security theories. Our analysis of the severity of nuclear war draws on a mix of topics in physics, civil engineering, toxicology, economics, sociology, meteorology, and more—and we’re really only able to scratch the surface.

Another difficulty is that there are inevitably gaps in this type of analysis. Historical near-miss incidents tell us about the probability of getting partway to a catastrophe, but not about the probability of a catastrophe ultimately occurring. Likewise, we can make only indirect inferences about how severe a catastrophe could be in the absence of direct observations of the same type of catastrophes. We might know a lot about the economics of supply chain disruptions, for example, but still not be able to extrapolate to the disruptions that would be caused by the loss of several major cities in a nuclear war.
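The near-miss gap can be made explicit with a simple decomposition: historical incidents inform the probability of reaching the brink, but the final step from brink to catastrophe must be estimated some other way. A sketch, with made-up numbers:

```python
# Decompose catastrophe probability into an observed and an unobserved part.
# Both numbers are placeholders, not estimates of any actual risk.
p_near_miss_per_year = 0.02         # estimable from historical incidents
p_escalation_given_near_miss = 0.1  # NOT directly observable; requires
                                    # judgment and modeling

p_catastrophe_per_year = p_near_miss_per_year * p_escalation_given_near_miss
print(f"Implied annual catastrophe probability: {p_catastrophe_per_year:.4f}")
```

The second factor is where the gap lies: no amount of near-miss data pins it down directly, which is why such analyses lean on theory and expert judgment.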

Progress on all of these various aspects of global catastrophic risks takes skill and time. Determining when risk analysis is worth the effort is itself a difficult question to answer rigorously. One safe conclusion is that detailed risk and decision analysis should generally not be done when it’s not needed to guide decisions. For example, we don’t need detailed analysis to tell us that energy conservation affects nuclear war risk less than nuclear power does. But when risk and decision analysis could guide decisions, we must assess when to do the analysis, and in how much detail. It is possible to do formal analysis of what analysis to do—this is analysis of the “value of information” and related matters—but this adds an extra layer of complexity to the overall project. Ultimately, we must use our judgment on which analyses to pursue.
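The value-of-information idea can itself be written down simply: compare the expected outcome of deciding now with the expected outcome of deciding after learning which state of the world holds. A toy sketch with invented states and payoffs:

```python
# Toy expected value of perfect information (EVPI) calculation.
# Two states of the world, two decision options; all payoffs are invented.
p_state_a = 0.6  # probability that, say, an intervention is net risk-reducing
payoffs = {
    ("act", "a"): 10, ("act", "b"): -5,
    ("wait", "a"): 0, ("wait", "b"): 0,
}

def expected(option):
    """Expected payoff of an option under current uncertainty."""
    return (p_state_a * payoffs[(option, "a")]
            + (1 - p_state_a) * payoffs[(option, "b")])

best_without_info = max(expected("act"), expected("wait"))
best_with_info = (p_state_a * max(payoffs[("act", "a")], payoffs[("wait", "a")])
                  + (1 - p_state_a) * max(payoffs[("act", "b")],
                                          payoffs[("wait", "b")]))
evpi = best_with_info - best_without_info
print(f"EVPI = {evpi}")  # further analysis is worth at most this much
```

If the analysis would cost more than the EVPI, it is not worth commissioning; in practice analysis yields imperfect information, so this figure is an upper bound.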

And risk and decision analyses are not always used

Risk and decision analysis itself is only one part of the process. The other part is using the analysis to make better decisions. Whereas the analysis is largely an intellectual challenge, putting the analysis to use requires promoting ideas and navigating the halls of power.

Ideally, any time we face an important decision about global catastrophic risk, decision makers would commission a risk or decision analysis to help them determine what to do. Suffice it to say, this doesn’t always happen. Sometimes decisions must be made too quickly for an analysis to be conducted. Sometimes there isn’t enough money to pay for the analysis. Sometimes decision makers don’t realize a risk or decision analysis could be done or simply don’t want one done.

The truth is, not everyone cares about global catastrophic risk the way those who focus on it do. Even when decision makers are personally concerned about global catastrophic risk, their institutional position may constrain them from taking action. For example, government agencies are often expected to work on issues that are local, near-term, and high-probability, such that global catastrophic risk falls outside the scope of their agenda.

We could do the analysis anyway, but if no one is going to put our analysis to use, we may just be wasting our time and our funding. In my experience, those of us who care about global catastrophic risk often spend too much time on analysis (whether it be risk and decision analysis or other types of research) and not enough time engaging decision makers. I have certainly been guilty of this myself.

Part of the problem is institutional: funders and employers (especially universities) often incentivize researchers to publish as much research as possible. That leaves little time for outreach and engagement and means there’s a lot of research that doesn’t really get used. This institutional problem is one reason why we chose not to set up GCRI at any university. Being independent gives us the flexibility to focus on what is most important. Unfortunately, much of our funding has been strictly for research and we have done less outreach and engagement than we could have. This is an area in which GCRI is working to improve.

To be useful, risk and decision analysis should be done in conversation with relevant decision makers. Initial analysis can help us figure out which decision makers to approach and what to say to them when we do. Likewise, the perspectives of decision makers can help us see which analyses would be most useful. But the need to engage with decision makers is yet another thing that makes risk analysis more challenging and time consuming.

So, what to do

As far as I can tell, the best approach is to carefully consider whether any given risk or decision analysis would produce enough insight to be worth the time it would take. We can’t fully know these things in advance of actually doing the analysis, so it is important to have a general understanding of the amount of time risk and decision analyses take and the insights they produce. This can come from in-house experience conducting risk and decision analysis and from the experiences of the wider risk analysis community. While the analysis of global catastrophic risks poses some distinctive challenges, there is nonetheless much to be learned from the analysis of other risks.

It can also be helpful to conduct risk and decision analyses in stages. Instead of committing from the start to a comprehensive analysis of a particular risk or decision, it may be better to commit only to a smaller, simpler analysis. The initial analysis can provide valuable insights on how productive this type of analysis is for that particular risk. Often, it’s hard to tell how feasible the analysis is until it’s in progress. This probably holds for a wide range of risks, and the global catastrophic risks are no exception.

One important principle is to try to ensure that any risk or decision analysis has an appropriate audience. We should certainly try to avoid wasting time and money on analysis that won’t be used, though it can be hard to say in advance what would be useful. We should probably also spend more time than we do engaging with decision makers, understanding their perspectives, and giving them the kind of input they will actually use, even if it is not always risk analysis per se.

If there is one thing that we can say with confidence from the analysis of global catastrophic risk, it is that it is far from clear which risks and which risk-reduction interventions are the most important. Some risks and interventions do look more important than others. But it is important not to latch on to any one specific risk or intervention to the exclusion of others. There’s just too much that we don’t know and maybe even can’t know. We need to have the humility to realize we may have faulty intuitions.
