The Far Future Argument for Confronting Catastrophic Threats to Humanity: Practical Significance and Alternatives

by Seth Baum | 14 October 2015

Certain major global catastrophes could cause permanent harm to humanity. A large body of scholarship makes a moral argument for confronting the threat of these catastrophes based on a concern for far future generations. The far future can be defined as anything beyond the next several millennia, including millions or billions of years from now, or even longer. Given the moral principle of caring about everyone equally, including people in the far future, confronting threats of permanent harm should be a major priority. The paper calls this the far future argument. 

Practical significance. The far future argument says we should try to confront catastrophic threats in order to benefit far future generations. Unfortunately, many people do not care much about far future generations and thus do not follow the far future argument. Fortunately, the practical task of confronting the threats does not always require caring about the far future. This paper assesses the practical significance of the far future argument by examining the extent to which confronting catastrophic threats to humanity requires caring about the far future. The paper surveys a range of threats according to several criteria. 

Catastrophe timing. If a catastrophe could occur in the near future, then confronting it will have near future benefits. The sooner a catastrophe could occur, the easier it may be to convince people to confront it. Most types of major global catastrophes could occur in either the near or far future, and some could only occur in the near future. Catastrophes that could occur in the near future thus account for almost all of the total risk.

Co-benefits and mainstreaming. Co-benefits are benefits of an action beyond its target goal. Actions aimed at confronting catastrophic threats can have other significant benefits, and these benefits can motivate people to take the actions even if they do not care about the threats themselves, let alone about the far future. Mainstreaming means fitting actions into established goals and procedures. Actions to confront the threats can be mainstreamed into a range of established goals and procedures, which makes the actions easier to take. Actions with large co-benefits that are well mainstreamed will often be the easiest to take; these make a good starting point for confronting the threats. However, some actions require large sacrifices, such that the only people likely to take them are those who support the far future argument.

Far future as inspiration. Some people do support the far future argument, and more people can be inspired to do so. The far future can provide analytical inspiration, based on the quantitative significance of far future generations, as well as emotional inspiration, based on the beautiful future that could occur as long as no major catastrophe ruins it forever. 

Academic citation:
Seth D. Baum, 2015. The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives. Futures, vol. 72 (September), pages 86-96, DOI 10.1016/j.futures.2015.03.001.

Download Preprint PDF | View in Futures

Image credit: NASA


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the time of the publication of the work referenced here.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
