December Newsletter: Year in Review

19 December 2022

Dear friends,

This year has been an important one for global catastrophic risk. The Russian invasion of Ukraine, a multitude of extreme weather events, the release of new AI systems, and the ongoing COVID-19 pandemic have either threatened global catastrophe or raised issues related to global catastrophic risk. Additionally, the recent collapse of the cryptocurrency company FTX has brought disruption and scrutiny to the field of global catastrophic risk due to FTX’s philanthropic connections to the field.

As explained in this year’s Annual Report, these events have prompted much reflection for us at GCRI. We now believe that, moving forward, we should be more active in public conversations related to global catastrophic risk, especially to provide perspective on current events and promote constructive solutions for reducing the risk. Despite growth in the field of global catastrophic risk, there remain relatively few public voices. Given our senior status in the field, our interdisciplinary expertise, and our comfort in the public sphere, this is a role we can play well.

I look forward to sharing more about our public outreach with you in the new year. We also welcome inquiries about this work and support for it. Please consider GCRI in your donations.

In the new year, Research Associate Andrea Owe will be leaving the GCRI team. Andrea has been an excellent contributor, working with us primarily on fundamental ethical issues. Her work has brought novel and important perspectives to the study of global catastrophic risk. She will be greatly missed, and we wish her the best in her future endeavors.

Sincerely,
Seth Baum
Executive Director

Advising and Collaboration Program Summary

In May, GCRI put out an open call for people interested in seeking our advice or in collaborating on projects with us. The open call was a continuation of the Advising and Collaboration Programs GCRI conducted in 2019, 2020, and 2021. Our 2022 Advising and Collaboration Program was quite successful and allowed us to continue networking and meeting others in the field of global catastrophic risk.

2022 GCRI Fellowship Program

GCRI is pleased to announce its 2022 GCRI Fellowship Program. The program recognizes a select group of four GCRI Fellows who made exceptional contributions to addressing global catastrophic risk in collaboration with GCRI in 2022. The 2022 GCRI Fellows range from undergraduates to senior professionals, and made contributions across a range of disciplines including nuclear war risk, misinformation, public health, and biosecurity.

Ethics of Funding

In response to recent events involving FTX and their philanthropic arm the FTX Future Fund, GCRI has released a statement on the ethics of funding sources. You can read the statement here.

Mastodon

Mastodon is a free, open-source social media platform, akin to Twitter, that has recently risen in popularity. On December 12, Executive Director Seth Baum participated in a discussion called Can thriving online include thriving on Mastodon? with environmental journalist Andy Revkin of the Columbia Climate School.

GCRI and GCRI Executive Director Seth Baum can now be found on Mastodon.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
