Open Call for Advisees and Collaborators, August 2023

2 August 2023

UPDATE: The open call for advisees and collaborators is now closed. Thank you to everyone who applied. However, anyone interested in seeking our advice and/or collaborating with us is still welcome to contact us per the instructions below, and we will include them in our next Advising and Collaboration Program.

GCRI is currently welcoming inquiries from people who are interested in seeking our advice and/or collaborating with us as part of our fifth annual Advising and Collaboration Program. Inquiries may cover any aspect of global catastrophic risk. We welcome inquiries from people with any academic or professional background, from any place in the world, and at any career point, including students.

Participation does not necessarily entail any significant time commitment. It can consist of anything from a short email exchange to more extensive project work. This year, we intend to emphasize group conversations to facilitate dialog and networking among people with synergistic backgrounds and interests.

This year, we are especially interested in the following themes:

Diversity and inclusion: Improving the success of people from underrepresented demographic groups within the field of global catastrophic risk. Demographics can include race, gender, geographic location, and more. We seek to support people from underrepresented groups as they advance their careers in the field. We additionally seek to connect with anyone who would like to learn more about how to help and potentially partner with us in advancing diversity and inclusion within the field of global catastrophic risk.

AI governance: Due to recent advances in AI technology and surrounding conversations, opportunities for AI governance have expanded. AI governance includes public policy, corporate governance, and more. However, there remains a significant learning curve to contributing. We seek to support people who wish to orient their careers toward AI governance, as well as people already in AI governance who wish to improve their capabilities. We welcome people with backgrounds in governance as well as people with backgrounds in computer science.

AI politics: Ongoing conversations about AI include heated debate about the extent to which AI poses a catastrophic risk and the appropriateness of discussing catastrophic AI risk in the first place. There is also intense opposition in some quarters to the project of understanding and reducing catastrophic AI risk. We seek to support people who need to navigate this political situation in order to advance AI risk reduction and related objectives.

Public scholarship: There is an acute need for more and better public conversation about global catastrophic risks, especially to advance solutions for reducing the risk. We seek to grow a community of people active in producing high-quality public scholarship on global catastrophic risk. We welcome people with backgrounds in global catastrophic risk as well as people with backgrounds in public media, including both traditional media and new/digital media and including text, audio, and video formats.

Students and early-career professionals: Global catastrophic risk is a challenging and multifaceted topic. People who are just starting out are often unsure about how they might fit in or how they can best pursue a career in the field. They can also benefit from guidance from more senior people in the field as well as networking opportunities. We welcome people with any background and interests who wish to learn more and advance their studies and careers.

Individuals interested in participating should email Dr. Seth Baum, seth [at] gcrinstitute.org. Please include a short description of your background and interests, what you hope to get out of your interaction with GCRI, which of the above theme(s) (if any) you are interested in, a resume/CV or a link to your professional website (or similar, such as LinkedIn), where you are based, and how you heard about the program. There is no deadline for submission, and we anticipate keeping the program open for several months unless otherwise specified.

For more information on ways to participate in GCRI activities, please view our Get Involved page.

We look forward to hearing from you.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
