November Newsletter: Giving Tuesday

22 November 2022

Dear friends,

GCRI would like to take this opportunity to thank you for your continued support throughout 2022. Because of your help, we have accomplished a great deal this year, including publishing research, hosting another successful Advising and Collaboration Program, and much more (you’ll find our summary of 2022 accomplishments in the upcoming December newsletter). Whether you subscribe to our newsletter, participate in our annual Advising and Collaboration Program, or have the means to donate, we are grateful for your generosity. To continue supporting GCRI’s activities, please consider donating here.

Sincerely,
McKenna Fitzgerald
Deputy Director

Russia – Ukraine War

GCRI Executive Director Seth D. Baum recently made a string of media appearances commenting on the probability of nuclear war given the ongoing conflict between Russia and Ukraine. He was featured in outlets such as the Harvard Kennedy School’s Russia Matters, Times Radio, ABC, Newsweek, and the Miami Herald. A master list of Baum’s commentary on nuclear war risk can be found here.

Natural Global Catastrophic Risks

GCRI Executive Director Seth D. Baum recently published the article “Assessing natural global catastrophic risks,” which discusses how natural hazards, such as volcanic eruptions, asteroid strikes, and climate change, pose significant threats to human civilization. In the paper, he addresses six natural threats, how they may be more severe given advances in civilization over time, and how the distinction between natural and artificial risks has become blurry.

You may also find Baum’s Twitter thread discussing the paper here.

Nonhuman Value

Research Associate Andrea Owe, Executive Director Seth D. Baum, and University of Vienna’s Prof. Mark Coeckelbergh recently published the article “Nonhuman value: A survey of the intrinsic valuation of natural and artificial nonhuman entities”. The article discusses how natural nonhuman entities, such as ecosystems or nonhuman animals, and artificial nonhuman entities, such as art or technology, might hold inherent value, and argues that this value should be considered in far-reaching issues such as factory farming, climate change, and technological development.

National Academies of Sciences Workshop Proceedings

On December 17 and 21 of 2021, Executive Director Seth Baum delivered a remote talk called “The challenges of addressing rare events and how to overcome them” at the workshop “Anticipating Rare Events of Major Significance”, hosted by the US National Academies of Sciences, Engineering, and Medicine. The proceedings from the workshop can now be found online, and Baum’s remarks can be found in Chapter 8, Active Prevention and Deterrence.

Deep Green Ethics

On August 31, Research Associate Andrea Owe delivered a remote talk to EA Nordics called “Deep green ethics and catastrophic risk”. Her talk can now be found online.

Ukraine and Nuclear War Risk

On October 22, Executive Director Seth Baum delivered a remote talk to the EAGxVirtual Conference called “Ukraine and nuclear war risk”. His talk can now be found online.

Survey on Diversity and Inclusion in Existential Risk

CSER’s Academic Programme Manager, SJ Beard, is conducting a new research study to understand and improve diversity and inclusion in the community of Existential Risk Studies. Learn more about the survey and participate here.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
