February Newsletter: Ukraine & Pluralism

28 February 2022

Dear friends,

We at GCRI are watching the ongoing Russian invasion of Ukraine with great concern. In addition to the grave harm being inflicted on the Ukrainian people, the invasion constitutes a major escalation of tensions between Russia and the West and a shooting war adjacent to several NATO countries. In our judgment, this increases the risk of US-Russia or NATO-Russia nuclear war and accompanying nuclear winter. Our hearts go out to the people of Ukraine who are enduring this tragic violence. For the sake of all parties, we hope that the conflict can be resolved quickly.

In GCRI news, we are announcing the publication of “Greening the universe: The case for ecocentric space expansion”, a new paper by GCRI Research Associate Andrea Owe. The paper presents Owe’s vision for global catastrophic risk and the long-term future. It calls for the near-term goal of avoiding global catastrophe as a prelude to the long-term goal of cultivating an ecologically flourishing cosmos, a “universe of weird and beautiful Earths”.

The ecocentric perspective of “Greening the universe” is distinctive. Most work on global catastrophic risk and the long-term future uses other perspectives, such as utilitarianism, in which the goal is to promote some conception of welfare or quality of life. We at GCRI believe it is important to consider a variety of perspectives on global catastrophic risk in order to better understand the topic and how to address it. We elaborate on this point in our new GCRI Statement on Pluralism in the Field of Global Catastrophic Risk.

Sincerely,
Seth Baum
Executive Director

AI Ethics and Environmentalism

On February 22, Research Associate Andrea Owe gave a talk to the Chalmers AI Research Centre on AI ethics and environmentalism. Her seminar, “Deepening AI ethics: AI and why we are in an environmental crisis”, can now be found online.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent large language models (LLMs) emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
