Lessons for Artificial Intelligence from Other Global Risks

21 November 2019

Download Preprint PDF

It has become clear in recent years that AI poses important global risks. The study of AI risk is relatively new, but it can learn a great deal from the study of similar, better-studied risks. GCRI's new paper applies lessons from four other global risks to the study of AI risk: biotechnology, nuclear weapons, global warming, and asteroids. The paper is co-authored by GCRI's Seth Baum, Robert de Neufville, and Tony Barrett, along with GCRI Senior Advisor Gary Ackerman. It will be published in a new CRC Press collection edited by Maurizio Tinnirello, titled The Global Politics of Artificial Intelligence.

The study of each of the four other risks contains valuable insights for the study of AI risk. Biotechnology and AI are both risky technologies with many beneficial applications. Episodes like the 1975 Asilomar Conference on Recombinant DNA Molecules and the ongoing debate over gain-of-function research show how controversies about the development and use of risky technologies could play out. Nuclear weapons and AI are both potentially of paramount strategic importance to major military powers. The initial race to build nuclear weapons shows what a race to build AI could be like. Global warming and AI risk are both in part the product of the profit-seeking of powerful global corporations. The fossil fuel industry’s attempts to downplay the dangers of global warming show one path corporate AI development could take. Finally, asteroid risk and AI risk are both risks of the highest severity. The history of asteroid risk management shows that policy makers can learn to take even risks that have a high “giggle factor” seriously.

The paper draws several important overarching lessons for AI from the four global risks it surveys. First, the extreme severity of global risks may not be sufficient to motivate action to reduce the risks. Second, how people perceive global risks is influenced by both their incentives and their cultural and intellectual orientations. These influences may be especially strong when the size of the risk is uncertain. Third, the success of efforts to address global risks often depends on whether they have the support of people who stand to lose from those efforts. Fourth, the risks themselves and efforts to address them are often heavily shaped by broader social and political conditions.

The paper also demonstrates the value of learning lessons for global catastrophic risk from the study of other risks. This is one reason why GCRI has always emphasized studying multiple global catastrophic risks. Another reason is that studying multiple risks allows cross-risk evaluation and prioritization.

Academic citation:
Seth D. Baum, Robert de Neufville, Anthony M. Barrett, and Gary Ackerman, 2022. Lessons for artificial intelligence from other global risks. In Maurizio Tinnirello (editor), The Global Politics of Artificial Intelligence. Boca Raton: CRC Press, pages 103-131.

Download Preprint PDF
View The Global Politics of Artificial Intelligence

Image credits:
Computer chip: Aler Kiv
Influenza virus: US Centers for Disease Control and Prevention
Nuclear weapon explosion: US National Nuclear Security Administration Nevada Field Office
Asteroid: NASA
Smoke stacks: Frank J. Aleksandrowicz

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
