Summary of January-July 2020 Advising and Collaboration Program

30 July 2020

In January, GCRI put out an open call for people interested in seeking our advice or collaborating on projects with us. This was a continuation of last year’s successful advising and collaboration program. We anticipate conducting a second round of the program later in 2020. The 2020 programs are made possible by generous support from Gordon Irlam.

This first 2020 program focused on a number of AI projects that are also supported by Irlam. Program participants were mostly people interested in AI risk, ethics, and policy. We connected with many talented people, some of whom made significant contributions to our ongoing AI project work. Through the program, we were able to build the community of people working on AI and global catastrophic risk while also advancing our in-house projects. One might think that community building and in-house project work would be at odds with each other, since the time available for each is limited, but our experience shows that the two can be synergistic.

We were able to collaborate more extensively with program participants this year in part because we had more funding available. As a result, we were able to offer a small amount of funding to select program participants who made substantial contributions to our AI project work. These participants have done and continue to do excellent work to advance our projects. We regret that we had only a limited amount of funding available and are grateful to program participants who contributed to our AI projects on a volunteer basis. Overall, the program proved to be a productive means of connecting with talented collaborators. We hope that we will be able to raise enough money through our ongoing fundraising efforts to offer more funding to participants in future programs. To support GCRI, please visit our donate page.

While the ongoing pandemic initially disrupted our advising and collaboration program, we were ultimately able to conduct it more or less as planned. Because the program was designed to be run remotely and on a flexible schedule, it was relatively easy for us to adjust to the pandemic. Although interacting with people remotely is not the same as meeting them in person, the flexibility of remote interaction proved to be a major advantage: whereas community activities like conferences were canceled or postponed indefinitely, we were able to continue working with participants in spite of the pandemic.

Some notable program highlights:

  • Between January and July, 60 people responded to our open call blog post; we spoke with 36 of them. We spoke with another 9 people as part of the EAGx Virtual Conference.
  • Respondents to our open call were based in more than 17 countries. Most were based in North America or Europe; others were based in countries including Singapore, Australia, Israel, and Colombia.
  • Respondents ranged from undergraduates to senior professionals and came from a variety of different fields and backgrounds. Many were seeking advice on how to get involved in the field of global catastrophic risk, particularly as it relates to AI and policy work.
  • Respondents expressed interest in the full range of our active AI projects. Of our respondents, 19 expressed interest in our AI corporate governance project, 14 in our AGI survey project, 14 in our international institutions project, 10 in our national security project, 8 in our safety transfer project, and 5 in each of our other projects (collective action, ethics, and expert judgment).
  • We held a total of 41 one-on-one video and phone advising calls.
  • We made 13 private introductions connecting program participants with each other and with other people in our networks. 

15 October 2020: The post has been edited to clarify that we spoke with 36 of the people who responded to our open call blog post.
