Summary of the 2023 Advising and Collaboration Program

5 December 2023

In August, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of our successful 2019, 2020, 2021, and 2022 Advising and Collaboration Programs. The 2023 Program was made possible by continued support from Gordon Irlam.

The GCRI Advising and Collaboration Program is an opportunity for anyone interested in global catastrophic risk to get more involved in the field. There is practically no barrier to entry: participants need only send us a short email expressing their interest. Participation is flexible to accommodate people’s schedules and needs. The program supports an open and inclusive field of global catastrophic risk and is oriented toward professional development and community building in order to advance work that addresses the risks.

The 2023 Advising and Collaboration Program emphasized supporting people pursuing careers in AI governance; about half of this year’s participants had that focus. AI is currently a hot topic in the policy world, thanks in part to the recent prominence of large language models. GCRI has been involved in AI governance research for many years, and we are happy to draw on that experience to support people starting out in this important field.

Other participants had a wide range of backgrounds and interests, including biosecurity, international relations, resilience in the aftermath of global catastrophes, the study of extremists who may seek to cause global catastrophe, and legal issues posed by global catastrophic risk. We aim for the Advising and Collaboration Program to support people from any background seeking to get involved in global catastrophic risk, and this year’s program achieved that aim.

We thank everyone who has made the 2023 Advising and Collaboration Program, and all of our iterations of the program over the last few years, a success.

Some notable 2023 program highlights:
• Between August 14 and October 10, a total of 44 people responded to our open call blog post. An additional 6 people reached out separately from the blog post. Of these 50 people, we spoke to 43 of them.
• Respondents to our open call were based in 15 countries. About half were based in the United States or the United Kingdom. Other countries where respondents were based include China, Finland, France, Germany, Hungary, India, Italy, Kenya, Nigeria, the Philippines, Spain, Switzerland, and Vietnam.
• We held a total of 44 one-on-one advising calls and 3 group calls for networking and discussion.
• We made 54 private introductions connecting program participants with each other and with other people in our networks.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
