Summary of the 2022 Advising and Collaboration Program

9 December 2022

In May, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of our successful 2019, 2020, and 2021 Advising and Collaboration Programs. The 2022 Program was made possible by continued support from Gordon Irlam.

The GCRI Advising and Collaboration Program is an opportunity for anyone interested in global catastrophic risk to get more involved in the field. There is practically no barrier to entry in the program: the only thing people need to do is to send us a short email expressing their interest. Participation is flexible to accommodate people’s schedules and needs. The program supports an open and inclusive field of global catastrophic risk and is oriented toward professional development and community building in order to advance work that addresses the risks.

Participants in the 2022 Advising and Collaboration Program have a wide range of backgrounds and interests. Many were interested in artificial intelligence (AI), especially AI ethics, or in nuclear war risk, particularly as it pertains to the ongoing war in Ukraine. Others were interested in ongoing global catastrophes, including pandemics and biosecurity, or in learning about careers in global catastrophic risk. As in previous years, participants came from many countries around the world and from every career point, from undergraduates to senior professionals. We are proud to be able to connect with and learn from such a diverse group of people.

Each year, GCRI collaborates with some Advising and Collaboration participants who express interest in working on our projects and are a good fit for them. Last year, we collaborated with a large group of people, which resulted in the launch of our 2021 GCRI Fellowship Program. The Fellowship Program highlights collaborators who have made significant contributions to the field of global catastrophic risk with GCRI over a calendar year. In 2022, we collaborated with fewer people, but those we did work with made excellent contributions to the field. Our 2022 Fellowship Program features four people who have worked on widely varying projects related to global catastrophic risk, including nuclear war and misinformation, public health, and artificial general intelligence.

We thank everyone who made the 2022 Advising and Collaboration Program, and all previous iterations of the program, a success. Thank you to all our wonderful participants, to the funders who made this program possible, and to our colleagues who helped circulate the call for participants.

To support GCRI, please visit our donate page. To learn more about how to get involved with GCRI activities in other ways, please send inquiries to Ms. McKenna Fitzgerald, mckenna [at] gcrinstitute.org.

Some notable 2022 program highlights:

  • Between May 17 and October 15, a total of 73 people responded to our open call blog post; we spoke to 48 of them.
  • Respondents to our open call were based in over 26 countries. Most were based in the United States or the United Kingdom. Other countries where respondents were based include Rwanda, the Philippines, Kenya, Nepal, and Singapore.
  • Respondents ranged from undergraduates to senior professionals and came from a variety of different fields and backgrounds. Many were seeking advice on how to get involved in the field of global catastrophic risk, particularly as it relates to AI ethics, AI policy, or nuclear war risk.
  • Respondents expressed interest in a variety of topics. Of our respondents, 17 expressed interest in learning more about global catastrophic risk generally, 11 in AI ethics, 10 in AI policy, 7 in AI governance, 7 in biosecurity and pandemic preparedness, 5 in AI scenarios, and 5 in nuclear security. We also received inquiries about additional topics, including AI in Africa, international relations, climate change, risk prioritization, asteroids, and the aftermath of global catastrophes.
  • We held a total of 54 one-on-one video and phone advising calls.
  • We made 22 private introductions connecting program participants with each other and with other people in our networks.


Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
