Open Call for Advisees and Collaborators, May 2021

12 May 2021

UPDATE: The open call for advisees and collaborators is now closed. Thank you to everyone who applied. However, anyone interested in seeking our advice and/or collaborating with us is still welcome to contact us per the instructions below, and we will include them in our next advisees and collaborators program.


GCRI is currently welcoming inquiries from people who are interested in seeking our advice and/or collaborating with us. Inquiries can concern any aspect of global catastrophic risk. We welcome inquiries from people at any career point, including students, from any academic or professional background, and from any place in the world. People from underrepresented groups are especially encouraged to reach out.

We are especially interested in inquiries from people whose interests overlap with ours. For details of our interests, please see our publications, our topics, and our current funded AI policy projects. We are most interested in fielding inquiries on the select projects outlined below. That said, we want to stress that this is an open call, and we encourage people to reach out to us if they are interested in any aspect of global catastrophic risk.

We also welcome inquiries from both colleagues we already know and people we have not met before. This open call is a chance for us to catch up with the people we already know as well as a chance to start new relationships with people we have not met before. It is also a chance for anyone to talk with us about how to advance their career in global catastrophic risk, to explore potential synergies with our work, and to expand their networks in the global catastrophic risk community.

Participation does not necessarily entail any significant time commitment. It can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get more involved by contributing to ongoing dialog, collaborating on research and outreach activities, and co-authoring publications. Some funding is available for people who collaborate with us on project work. We are most likely to fund work on the projects listed below, though there is no strict restriction on the scope of the projects we can fund. Details are available upon request.

This is the third time that GCRI has hosted an advising and collaboration program. Prior programs are documented here.

Individuals interested in speaking with us or collaborating with us should email Ms. McKenna Fitzgerald, mckenna [at] gcrinstitute.org. Please include a short description of your background and interests, what you hope to get out of your interaction with GCRI, a resume/CV or a link to your professional website, where you are based, and how you heard about the program. It would also be helpful to include your name in the subject line of the email and, in the body, any ideas for how you could contribute to GCRI’s projects.

The projects:

AI policy

We are doing a mix of research and outreach on AI policy. The research studies what policies could affect global catastrophic risk, including by affecting outcomes related to long-term AI. The outreach seeks to influence ongoing AI policymaking in the US and internationally. This work is being done in close collaboration with colleagues at other organizations interested in AI and global catastrophic risk. We welcome inquiries from people interested in these topics, especially those not already coordinating with others in the global catastrophic risk community on how to approach these issues.

Expert judgment on long-term AI

We are applying best practices in expert elicitation methodology to the study of long-term AI. Specifically, we are evaluating prior use of expert judgment on long-term AI and developing recommendations for future research. We welcome inquiries from people with a background in expert elicitation methodology and people who are involved in research using expert judgment on long-term AI.

Forecasting global catastrophic risk

We are studying potential ways to forecast global catastrophic risk accurately. We are particularly interested in extending and adapting the methods associated with the Good Judgment Project to forecasting rare and unprecedented events like global catastrophes. We welcome inquiries from anyone interested in forecasting global catastrophic risk.
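
As a rough illustration of the kind of method we have in mind, the sketch below pools several forecasters' probabilities for a rare event by averaging their log-odds and extremizing the result, an aggregation approach studied in work connected to the Good Judgment Project. The forecaster inputs and the extremizing factor are hypothetical placeholders, not values from our research.

```python
import math

def pool_forecasts(probs, extremize=1.5):
    """Pool individual probability forecasts by averaging their log-odds and
    extremizing the result. An extremize factor > 1 pushes the pooled forecast
    away from 0.5 to counteract the underconfidence of simple averages."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    pooled = extremize * sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-pooled))

# Hypothetical forecasts of a rare event from four forecasters (illustrative only).
forecasts = [0.02, 0.05, 0.01, 0.03]
print(f"Pooled probability: {pool_forecasts(forecasts):.4f}")
```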

Improving China-West relations

We are at the early stages of studying China-West relations as they relate to global catastrophic risk. Specifically, we are studying whether there are opportunities to improve relations that could significantly reduce global catastrophic risk. Our work is largely motivated by concerns about the potential danger of competition to develop advanced AI, but we recognize that improved relations could mitigate a range of global catastrophic risks. We are particularly interested in connecting with people who have a background in China-West relations and people from China with a perspective on these issues.

National security dimensions of AI

We are doing work on the relationship between AI and nuclear weapons and on other military applications of AI, including autonomous weapons. We are particularly interested in connecting with people with up-to-date knowledge of the academic literature and the policy dialog related to military AI.

Near-Earth objects and nuclear war

We are interested in the risk that an explosion caused by a near-Earth object (an asteroid, comet, or meteor) colliding with Earth could be misinterpreted as a nuclear explosion and trigger a nuclear war. We have explored this topic in prior research (this and this), including through a preliminary risk model. We are interested in refining the risk analysis of this topic and further developing its implications, and we would welcome the assistance of a student or early-career researcher, ideally with a background in, or at least an interest in, both NEOs and risk analysis.
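
To make the structure of such an analysis concrete, the sketch below decomposes the risk into a chain of conditional probabilities, loosely in the spirit of a simple risk model. Every numerical value is a placeholder for illustration only, not an estimate from our research.

```python
# Illustrative decomposition of the NEO-misinterpretation risk into a chain of
# conditional probabilities. All numbers are placeholders, not research estimates.

annual_p_neo_explosion = 1e-2        # placeholder: sizable NEO airburst in a given year
p_near_nuclear_armed_state = 0.2     # placeholder: explosion occurs near a nuclear-armed state
p_misread_as_nuclear = 0.05          # placeholder: event initially read as a nuclear detonation
p_escalates_to_nuclear_war = 0.01    # placeholder: misinterpretation leads to nuclear war

annual_risk = (
    annual_p_neo_explosion
    * p_near_nuclear_armed_state
    * p_misread_as_nuclear
    * p_escalates_to_nuclear_war
)

print(f"Illustrative annual probability of a NEO-triggered nuclear war: {annual_risk:.1e}")
```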

Nonhumans in AI ethics

We are studying how concern for nonhumans should be incorporated in the design and use of AI systems. Our project involves fundamental topics in moral philosophy, the development and application of institutional ethics principles in governance, and the design of techniques such as inverse reinforcement learning to handle nonhumans appropriately. We are considering both “natural” nonhumans, such as nonhuman animals and ecosystems, and “artificial” nonhumans, such as AI systems themselves. We are part of a small community of ethicists and computer scientists working on this topic, and we welcome inquiries from anyone who wishes to join.
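
As a minimal sketch of what incorporating concern for nonhumans could look like at the level of an AI system's objective, the toy code below scores outcomes as a weighted sum of human and nonhuman welfare terms. The outcome attributes and weights are hypothetical; choosing and justifying such weights is exactly the kind of moral-philosophy question the project addresses.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical outcome evaluated by an AI system's objective."""
    human_welfare: float      # aggregate human welfare (arbitrary units)
    animal_welfare: float     # aggregate nonhuman animal welfare
    ecosystem_health: float   # aggregate ecosystem health
    ai_welfare: float         # welfare of AI systems themselves, if any

def objective(o: Outcome, weights=(1.0, 0.5, 0.5, 0.1)) -> float:
    """Toy objective: a weighted sum of human and nonhuman welfare terms.
    The weights are placeholders, not positions we endorse."""
    w_h, w_a, w_e, w_ai = weights
    return (w_h * o.human_welfare + w_a * o.animal_welfare
            + w_e * o.ecosystem_health + w_ai * o.ai_welfare)

# Example: compare two hypothetical outcomes under the toy objective.
print(objective(Outcome(10.0, 2.0, 3.0, 0.0)))
print(objective(Outcome(9.0, 6.0, 5.0, 0.0)))
```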

Nonhumans in global catastrophic risk

We are studying alternative conceptions of global catastrophic risk that include catastrophes affecting nonhumans. Global catastrophic risk has traditionally been defined in terms of the impact of catastrophes on humans, but arguably the impact of catastrophes on nonhumans is equally important. This is an early stage project, and we welcome inquiries from anyone with interest in the topic.

Transfer of safety techniques to AI projects

We are studying how to ensure important safety techniques developed by external groups are implemented in AI projects. Our initial work has explored both the AI governance and safety science literatures, but we have so far found very little relevant research. We welcome any suggestions, ideas, or offers of assistance.
