December Newsletter: Thank You & Happy New Year

15 December 2021

Dear friends,

As this year comes to a close, we at GCRI would like to formally express our gratitude for your continued support. Support comes in many forms, and we recognize that not everyone has the ability to support us financially. However, we are lucky enough to receive a variety of other helpful forms of support, such as when someone shares our work, reads our research papers, collaborates with us on projects, introduces us to their colleagues, or simply finds time to connect with us. We have accomplished much this year and have big plans for the next (ask Executive Director Seth Baum about them during his upcoming Ask Me Anything on the Effective Altruism Forum). We hope to make those plans a reality with your help. Every like, share, email, and cent makes a difference. We thank you all for your continued support as we head into 2022.

We would also like to thank departing Director of Communications Robert de Neufville for his invaluable contributions to GCRI over the years. De Neufville played an essential role in GCRI research and operations, and we wish him luck in his future endeavors.

We wish you all happy holidays and a wonderful, healthy new year.

Sincerely,
McKenna Fitzgerald
Deputy Director

GCRI Receives $200,000 for Work on AI in 2022

GCRI received a new $200,000 donation from Gordon Irlam to fund work on AI in 2022. Irlam also made donations in support of our AI project work in 2021, 2020, and 2019. We are grateful for Irlam’s continued support.

New Artificial Intelligence Ethics Paper

Research Associate Andrea Owe and Executive Director Seth Baum have a paper forthcoming in AI & Society titled From AI for people to AI for the world and the universe. The short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like “AI for the World” or “AI for the Universe” instead of “AI for the People”.

Global Catastrophic Risk Remote Talks

On December 9, Executive Director Seth Baum was part of an online panel discussion titled The future of catastrophic risk hosted by the University of Warwick.

On December 17, Executive Director Seth Baum will be doing an “Ask Me Anything” event on the Effective Altruism Forum.

On December 17 and 21, Executive Director Seth Baum will deliver a remote talk titled The challenges of addressing rare events and how to overcome them to the US National Academies of Sciences, Engineering, and Medicine event Anticipating Rare Events of Major Significance.

Ethics Remote Talks

Research Associate Andrea Owe recently gave two remote talks. On November 18, Owe gave a remote talk to the Open University titled Environmentalism in space.

On November 20, Andrea Owe gave another remote talk to the International Conference on AI for People: Sustainable AI (CAIP’21) titled Ethics of sustainability.

Society for Risk Analysis Annual Meeting 2021

Executive Director Seth Baum and 2021 GCRI Fellows recently presented at the 2021 Society for Risk Analysis Conference. They presented three late-breaking posters: Military AI and global catastrophic risk presented by Fellow Uliana Certan and Seth Baum; Moral circle expansion as a means of advancing management of global catastrophic risks presented by Fellows Manon Gouiran, Dakota Norris, and Seth Baum; and Policy attention to extreme catastrophic risk: The curious case of near-earth objects presented by Fellow Aaron Martin and Seth Baum.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
