August Newsletter: Long-Term Trajectories

by Seth Baum | 15 August 2018

Dear friends,

This month I am proud to announce a new paper, “Long-Term Trajectories of Human Civilization”. The paper calls for attention to the fate of human civilization over time scales of millions, billions, or trillions of years into the future. While most attention goes to nearer-term phenomena, the long term can be profoundly important to present-day decision-making. For example, one major issue the paper examines is the fate of global catastrophe survivors. How well they fare is a central factor in whether people today should focus on risks of human extinction, risks of sub-extinction global catastrophes, or other issues.

The paper was a large group effort. I am the lead author of a 14-person author team that includes GCRI associates Matthijs Maas and Roman Yampolskiy. The paper is based on a workshop I led at last year’s Workshop on Existential Risk to Humanity at Chalmers University of Technology in Gothenburg, Sweden, organized by Olle Häggström. I’d like to thank all of the co-authors and the other workshop participants for making this a much better paper than I could have written on my own.

For more information, please see the announcement on the GCRI blog and read the paper.

Sincerely,
Seth Baum, Executive Director

General Risk

GCRI Executive Director Seth Baum was the lead author of a paper forthcoming in Foresight, “Long-Term Trajectories of Human Civilization,” written with an international group of 13 other scholars including Stuart Armstrong, Olle Häggström, Robin Hanson, Karin Kuhlemann, Anders Sandberg, and GCRI associates Roman Yampolskiy and Matthijs Maas. They identify four types of potential long-term trajectories: status quo trajectories, in which civilization stays about the same; catastrophe trajectories, in which civilization collapses; technological transformation trajectories, in which radical technology fundamentally changes civilization; and astronomical trajectories, in which civilization expands beyond Earth.

GCRI Executive Director Seth Baum gave a talk “An Evening with the Global Catastrophic Risk Institute” for the Effective Altruism NYC group on August 9.

Artificial Intelligence

GCRI Executive Director Seth Baum gave a talk titled “Introduction to Artificial Intelligence Research” at Tech2025 on August 14 in New York City.

GCRI Associate Roman Yampolskiy gave the keynote address at the Techno Security & Digital Forensics conference in Myrtle Beach, SC on June 4. Yampolskiy was also interviewed about “AI Safety, Possible Minds, and Simulated Worlds” on the Future of Life podcast and about “Artificial Intelligence, Risk, and Alignment” on Economics Detective Radio.

GCRI Associate Dave Denkenberger co-authored a paper with Alexey Turchin titled “Classification of Global Solutions for the AI Safety Problem” that won a top prize in GoodAI’s General AI Challenge.

Asteroid Risk

GCRI Executive Director Seth Baum’s paper on “Uncertain Human Consequences in Asteroid Risk Analysis and the Global Catastrophe Threshold” is forthcoming in Natural Hazards.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
