Nick Beckstead Gives Lecture on Shaping the Far Future

27 August 2013

On Thursday 15 August, GCRI hosted an online lecture by Nick Beckstead entitled ‘On the Overwhelming Importance of Shaping the Far Future’ (see the pre-lecture announcement). Beckstead recently finished a PhD in the Philosophy Department at Rutgers University, where he focused on normative ethics, applied ethics, and decision theory. He is currently a Research Fellow at the University of Oxford’s Future of Humanity Institute. The lecture was based on Beckstead’s dissertation of the same title, which focuses on existential risk, population ethics, and decision theory. Beckstead’s talk notes are also online here.

As the title suggests, the lecture is about the value of shaping the far future – even millions or billions of years from now, or more. Beckstead works within the philosophical and social movement of effective altruism, which asks, “How can we most effectively make a positive impact on the world?” or “How can we do what is best?” Beckstead’s answer: “Do what is best for future generations.” On this view, helping people (or animals, or nature) alive today matters primarily instrumentally, to the extent that doing so positively or negatively affects future generations.

The emphasis on future generations is based on the assumption that civilization has a significant probability of (1) lasting a very long time, (2) becoming very large, and/or (3) becoming very good (or very bad) per unit time.

Space colonization is one way in which our civilization could last a very long time and become very large. Whether space colonization is possible remains an open question. Beckstead cites recent research that carefully analyzes the feasibility of such a scenario, estimating that humanity could colonize about 4 billion galaxies [1]. Further, some stars will burn for another 100 trillion years, so future generations might be able to sustain life around those stars for a comparable span [2].

Artificial minds might also lead to a large civilization. Beckstead observes that while some people doubt that there will be conscious machine intelligences, many AI researchers, philosophers, and futurists think that advanced machine intelligence is possible and that it could be conscious [3]. And if we create machine intelligences modeled on human brains, Earth could likely sustain very large populations of them [4].

But even without space colonization or artificial minds, ordinary human populations could remain on Earth for a long time: several hundred million or several billion years [5]. This means that anything we do that affects the long-term trajectory of human civilization could have a very large impact.
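
To see why this argument can dominate an expected value calculation, consider a back-of-the-envelope comparison. The sketch below is this author’s illustration, not part of Beckstead’s talk; every number in it is an arbitrary assumption chosen only to exhibit the structure of the argument.

```python
# Back-of-the-envelope expected value comparison (illustrative only).
# Every number below is an arbitrary assumption chosen to show the
# structure of the argument, not an estimate from Beckstead's talk.

present_value = 1e10   # assumed value of greatly helping people alive today
future_value = 1e30    # assumed value of a very large, long-lasting civilization
delta_p = 1e-9         # assumed shift in the probability of reaching that
                       # future from one trajectory-shaping action

expected_present = present_value           # 1e10
expected_future = delta_p * future_value   # 1e-9 * 1e30 = 1e21

print(f"present-focused action:    {expected_present:.1e}")
print(f"trajectory-shaping action: {expected_future:.1e}")
# Under these assumptions the far-future term dominates by eleven
# orders of magnitude.
```

Under assumptions like these, even a one-in-a-billion shift in the probability of reaching a vast future outweighs enormous benefits to the present generation; this is the intuition behind Beckstead’s Main Claim. Objections to this style of reasoning, such as discounting and Pascal’s Wager worries, are noted in the abstract at the end of this post.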

This emphasis on long timescales can fundamentally change the value of helping people, animals, or nature. Beckstead provides this example:

“…making someone 10% happier in a rich country may, on average, have much more significant environmental, technological, and political consequences–whether you think they are positive or negative–than helping someone in a similar way in a poor country. But you might think the effects were approximately equal if you were just focusing on doing what is best for people alive today or what would be best for the next few generations.”

Beckstead also uses the example of animals to illuminate this way of thinking. Traditionally, helping animals is predicated on reducing animal suffering. Beckstead believes that preventing animal suffering is important in its own right, but that it has limited long-run consequences because, unlike humans, animals do not make decisions that affect civilization in substantial ways. Thus, if helping animals is to have substantial long-run positive impacts, it must do so by affecting people—such as by shaping their values, reducing climate change [6], or making food production more efficient. This perspective is anthropocentric in practice because of the pivotal role humans are likely to play in long-run outcomes.

At an even broader level, if we accept that doing what is best approximately reduces to putting humanity on the best future development trajectory we can, then we must also promote actions that substantially reduce the chances of a global catastrophe, particularly one that society cannot recover from. We should also consider the loss in value from a non-permanent collapse of civilization, which requires analysis of the adaptation and recovery potential of society [7].

Participants raised several discussion points throughout the lecture. As one participant pointed out, the default state for most animal species (including humans) is Malthusian, and even though the industrial revolution has propelled humanity out of subsistence, there is no guarantee that this trend will continue. How, then, can we presume that humanity will continue on this upward trajectory? There is also the question of governance. Arguably, a large and ever-expanding civilization addressing global issues may require some kind of “singleton” world order, a single decision-making agency at the highest level [8]. As one can imagine, there are large benefits and risks to global governance, all of which warrant their own discussion. Regardless, as one participant made clear, if what we care about is the long-term future, then we want to think about these issues and influence society accordingly.

But, then, what about the end of the universe? As another participant brought to light, if we are willing to consider such extremely long timescales, on the order of billions and trillions of years, then don’t we also have to consider that physicists expect the universe to eventually approach a “heat death”? In that scenario, the ultimate fate of the universe would seem to render human life moot in the grand scheme of things. It would follow that it does not matter whether we achieve value on intermediate timescales. How can anything, including us, have value if it will all disappear?

In response, Beckstead cites Thomas Nagel’s piece “The Absurd” [9]. Nagel wrote:

“Yet humans have the special capacity to step back and survey themselves, and the lives to which they are committed… Without developing the illusion that they are able to escape from their highly specific and idiosyncratic position, they can view it sub specie aeternitatis [under the aspect of eternity] —and the view is at once sobering and comical…If sub specie aeternitatis there is no reason to believe that anything matters, then that does not matter either, and we can approach our absurd lives with irony instead of heroism or despair.”

Like Nagel, Beckstead argues from the individual case: yes, I will die. But while I live, my life can go well or badly, be full of joys or sorrows, and I can affect how I live. These individual moments have value regardless of the vastness of the universe. Aggregated together, such moments can make for a “good” period in history, or many such periods.

Participants also raised a related issue about tradeoffs between the interests of present and future generations. If future generations are so much more important, does this mean that people today should sacrifice greatly for the future [10]? Beckstead did not commit to supporting such massive sacrifice. However, there may be ways to help both present and future generations, with no tradeoff. These include creating a broadly more functional and resilient society now: improving education, improving economic prosperity, creating effective political institutions, reducing CO2 emissions, reducing government corruption, reducing nuclear proliferation, and creating more liberal democracies, to name a few. Doing so helps people today and could also help future generations by making humanity more capable of addressing global challenges, including bouncing back from global catastrophe.

One personal reflection that emerged from the lecture for this author was the idea of reciprocity. Present generations inherit everything from previous generations; we too will eventually be ancestors, and thus we too influence what is passed down to the next generation, ad infinitum. Acknowledging this relationship with distant generations, past and future, forces us to acknowledge our inheritance and, as Beckstead highlights, to consider how our decisions will affect the resilience of generations in the far future. Present generations constantly broker relationships with technology and with the natural world in order to allocate resources. Over time, these relationships have shifted from predators, food, tools, and clothing to computers, phones, and other forms of technology, and they may shift again. Regardless, we have a history of steadily acquiring resources, and we have mechanisms and principles (though not necessarily sufficient ones) for coping with scarcity and abundance on short timescales, for this generation and perhaps the next several. We have no mechanisms, however, and very few principles, that consider reciprocity and resource allocation for the 100th or 100,000th generation.

And thus, Beckstead leaves us with more questions than answers about how to broker a new relationship with future generations. We must consider which factors will make future generations more or less able to handle challenges and opportunities with very long-term consequences, and consider in earnest our influence on humanity’s ability to survive.

Here is the full abstract of the talk:

In this talk I will argue for two claims. The Main Claim is: As a rough generalization, when seeking policies or projects with the potential to have large expected positive impact, the expected impact of a policy or project is primarily a function of that project or policy’s expected effects on the very long-term trajectory of civilization. The basic argument for this Main Claim depends on the further claims that civilization has a reasonable probability of eventually becoming very large, very good, and/or lasting a very long time, and if this is possible then the possibility is so important that my Main Claim is true. The Secondary Claim is: If the Main Claim is true, it significantly changes how we should evaluate projects and policies which aim to have a large expected positive impact. I will defend this Secondary Claim more tentatively because I believe it is less robustly supported than the Main Claim.

In the remaining time I will discuss objections to the arguments and claims presented; which specific topics I discuss will be determined by audience preference. Some possible objections which we could discuss include: the objection that very long-term considerations are largely irrelevant because we should primarily care about the interests of people alive today, the objection that very long-term considerations are much less relevant than I claim because future generations have significantly diminishing marginal value, the objection that proper use of discount rates makes very long-term considerations largely irrelevant, the objection that relying on arguments like this is akin to accepting something like Pascal’s Wager because the probability of changing the very long-term future is small and hard to estimate, and the objection that we are generally so bad at thinking about very long-term considerations that it would be better if we generally continued to focus on short-term and medium-term considerations.

The presentation was hosted online via Skype, with online notes shown on Workflowy. Attendees included: Miles Brundage, a PhD student in Human and Social Dimensions of Science and Technology at Arizona State University; Luke Haqq, a PhD student in Jurisprudence at UC Berkeley; Jacob Haqq-Misra, a Research Scientist with the Blue Marble Space Institute of Science and a GCRI Research Associate; Jason Ketola, who works in online transaction fraud detection; Jess Riedel, a post-doctoral researcher in quantum information at IBM Research; Christian Tarsney, a PhD student in philosophy at the University of Maryland; and GCRI’s Seth Baum, Kaitlin Butler, and Grant Wilson.

[1] Armstrong, Stuart, and Anders Sandberg. (2013). “Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox”. Acta Astronautica, 89, 1-13.

[2] Adams, Fred C. (2008). “Long-term astrophysical processes”. In Bostrom, Nick and Milan M. Ćirković, editors, Global Catastrophic Risks, pages 33-47. Oxford University Press.

[3] Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart, editors. (2012). Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer.

[4] Hanson, Robin. (1994). “If Uploads Come First: The crack of a future dawn”. Extropy, 6(2), 10-15.

[5] Schröder, Klaus-Peter, and Robert Connon Smith. (2008). “Distant future of the Sun and Earth revisited”. Monthly Notices of the Royal Astronomical Society, 386(1), 155-163.

[6] Food and Agriculture Organization of the United Nations. (2006). Livestock’s Long Shadow: Environmental Issues and Options. Rome: FAO.

[7] Maher, Timothy M. Jr., and Seth D. Baum. (2013). “Adaptation to and recovery from global catastrophe”. Sustainability, 5(4), 1461-1479.

[8] Bostrom, Nick. (2006). “What is a Singleton?” Linguistic and Philosophical Investigations, 5(2), 48-54.

[9] Nagel, Thomas. (1971). “The Absurd”. The Journal of Philosophy, 68(20), 716-727.

[10] For similar ideas on present generation vs. future generation tradeoffs and sacrifices, see for example: Chichilnisky, Graciela. (2009). “Avoiding extinction: Equal treatment of the present and the future”. Economics: The Open-Access, Open-Assessment E-Journal, 3 (2009-32).
