Long-Term Trajectories of Human Civilization

8 August 2018

Download Preprint PDF

Society today needs to pay greater attention to the long-term fate of human civilization. Important present-day decisions can affect what happens millions, billions, or trillions of years into the future. The long-term effects may be the most important factor in present-day decisions and must be taken into account. An international group of 14 scholars calls for the dedicated study of “long-term trajectories of human civilization” in order to understand long-term outcomes and inform decision-making. This new approach is presented in the academic journal Foresight, where the scholars make an initial evaluation of potential long-term trajectories and their present-day societal importance.

“Human civilization could end up going in radically different directions, for better or for worse. What we do today could affect the outcome. It is vital that we understand possible long-term trajectories and set policy accordingly. The stakes are quite literally astronomical,” says lead author Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, a non-profit think tank in the US.

The group of scholars, including Olle Häggström, Robin Hanson, Karin Kuhlemann, Anders Sandberg, and Roman Yampolskiy, has identified four types of long-term trajectories: status quo trajectories, in which civilization stays about the same; catastrophe trajectories, in which civilization collapses; technological transformation trajectories, in which radical technology fundamentally changes civilization; and astronomical trajectories, in which civilization expands beyond our home planet.

The scholars find that status quo trajectories are unlikely to persist over the long term. Whether humanity succumbs to catastrophe or achieves a more positive trajectory depends on what people do today.

“In order to succeed it is important to have a plan. Long-term success of humanity depends on our ability to foresee problems and to plan accordingly,” says co-author Roman Yampolskiy, Associate Professor of Computer Engineering and Computer Science at the University of Louisville. “Unfortunately, very little research looks at the long-term prospects for human civilization. In this work, we identify some likely challenges to long-term human flourishing and analyze their potential impact. This is an important step toward successfully navigating such challenges and ensuring a thriving future for humanity.”

The scholars emphasize the enormous scales of the long-term future. Depending on one’s ethical perspective, the long-term trajectories of human civilization can be a crucial factor in present-day decision-making.

“The future is potentially exceedingly vast and long,” says co-author Anders Sandberg, Senior Research Fellow at the Future of Humanity Institute at the University of Oxford. “We are in a sense at the dawn of history, which is a surprisingly powerful position. Our choices – or lack of decisions – will strongly shape which trajectory humanity will follow. Understanding what possible trajectories there are and what value they hold is the first step towards formulating strategies for our species.”

UPDATE (March 11, 2019): The paper is featured in the BBC article “The perils of short-termism: Civilisation’s greatest threat.”

UPDATE (December 7, 2020): The paper won the 2020 Emerald Publishing Literati Award for Outstanding Paper.

Academic citation:
Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019. Long-Term Trajectories of Human Civilization. Foresight, vol. 21, no. 1, pages 53-83, DOI 10.1108/FS-04-2018-0037.

Download Preprint PDF | View in Foresight

For more information, please contact:
Dr. Seth Baum at seth@gcrinstitute.org
Dr. Anders Sandberg at anders.sandberg@philosophy.ox.ac.uk
Dr. Roman Yampolskiy at roman.yampolskiy@louisville.edu

Image credits:
Times Square: Matias Garabedian
Cemetery: Scott Rodgerson
Future city: JCT 600
Space colony: NASA Ames Research Center

