May Newsletter: 2021 Advising/Collaboration Program

28 May 2021

Dear friends,

GCRI has just opened a new round of our advising and collaboration program. It is an open call for anyone who would like to connect with us. We are providing advice on career opportunities, research directions, and anything else related to global catastrophic risk. We are also discussing opportunities to collaborate on specific projects, including several active GCRI projects listed online. Whether you are new to the field or an old colleague seeking to reconnect, we welcome your inquiry. For further information, please see here.

Sincerely,
Seth Baum, Executive Director

Remote Talks

GCRI Research Associate Andrea Owe gave a remote talk, “Philosophy and the Ethics of Space Exploration”, to Harvard’s Berkman Klein Center on May 6 as part of the center’s spring 2021 “research sprint” on digital self-determination.

GCRI Executive Director Seth Baum is giving a remote talk, “Setting the Stage for Future AI Governance”, to the Center for Human-Compatible Artificial Intelligence (CHAI) on June 8 as part of the 2021 CHAI Virtual Workshop. The talk will discuss steps that can be taken today to improve AI governance in the future, such as improving governance conditions in general, developing new AI governance concepts, and supporting the growth of the field of AI governance.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
