Open Call for Advisees and Collaborators, May 2022

17 May 2022

UPDATE: The open call for advisees and collaborators is now closed. Thank you to everyone who applied. Anyone interested in seeking our advice and/or collaborating with us is still welcome to contact us per the instructions below, and we will include them in our next Advising and Collaboration Program.


GCRI is currently welcoming inquiries from people who are interested in seeking our advice and/or collaborating with us as part of our fourth annual Advising and Collaboration Program. Inquiries may cover any aspect of global catastrophic risk. We welcome inquiries from people at any career point, including students, from any academic or professional background, and from any place in the world. People from underrepresented groups are especially encouraged to reach out.

While we encourage people to reach out to us about any aspect of global catastrophic risk, we are especially interested in inquiries from people whose interests overlap with ours. For details of our interests, please see our publications, topics, and our current funded AI policy projects.

We welcome inquiries both from colleagues we already know and from people we have not met before. This open call is a chance for us to catch up with existing colleagues and to start new relationships. It is also a chance for anyone to talk with us about how to advance their career in global catastrophic risk, to explore potential synergies with our work, and to expand their networks in the global catastrophic risk community. We encourage new participants to read GCRI Executive Director Seth Baum’s Common Points of Advice for Students and Early-Career Professionals for themes frequently discussed in previous programs.

Participation does not necessarily entail any significant time commitment. It can consist of anything from a short email exchange to more extensive project work. In some cases, people may be able to get more involved by contributing to ongoing dialog, collaborating on research and outreach activities, co-authoring publications, or becoming GCRI Fellows. For examples of different types of participation, please read testimonials from the 2021 Program. Some funding is available for people who collaborate with us on project work. Details are available upon request.

Individuals interested in speaking or collaborating with us should email Ms. McKenna Fitzgerald, mckenna [at] gcrinstitute.org. Please include a short description of your background and interests, what you hope to get out of your interaction with GCRI, a resume/CV or a link to your professional website, where you are based, and how you heard about the program. It would also be helpful to include your name in the subject line of the email and any ideas for how you could contribute to GCRI’s projects in the body. There is no deadline for submission, and we anticipate keeping the program open through the summer unless otherwise specified.

For more information on ways to participate in GCRI activities, please view our Get Involved page.

We look forward to hearing from you.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.