January Newsletter: GCRI Is Hiring + Call For Advisees/Collaborators

30 January 2020

Dear friends,

I am delighted to announce that GCRI is currently hiring for the position of Junior Research Assistant and Project Manager. This is a great opportunity for someone seeking to make an impact and advance their career in global catastrophic risk. Additionally, GCRI has recently launched a new advising and collaboration program for people at all career points interested in our active AI projects. Please see below for details of both of these items. They are both made possible by a generous recent donation from Gordon Irlam.

We would be most grateful if you could distribute information about these two opportunities to people who may be interested.

Sincerely,
Seth Baum, Executive Director

Job Posting: Junior Research Assistant and Project Manager

GCRI is seeking to hire one or more people for research assistant and project manager roles. We aim to hire one person on a full-time basis or multiple people on a part-time basis. The position involves a range of research and administrative tasks, with an initial focus on AI projects. The salary would be between $35,000 and $60,000 per year (or the equivalent for part-time work), depending on education, experience, and location. This position is only available to people with a legal right to work in the US. No relocation would be required; the work could be done remotely from anywhere in the US. More information about the position and how to apply is available here.

Call for Advisees and Collaborators on Select AI Projects

GCRI has announced a call for advisees and collaborators on our active AI projects. We welcome inquiries from people interested in seeking our advice and/or collaborating with us. Our AI projects cover collective action, corporate governance, ethics, expert judgment, global strategy, international institutions, national security, R&D programs, and safety practices. Details on the projects and how to get involved are available here.

GCR Session at the Society for Risk Analysis Meeting

GCRI hosted a global catastrophic risk session at the 2019 Annual Meeting of the Society for Risk Analysis. The session was chaired by GCRI Director of Research Tony Barrett and included talks by GCRI Executive Director Seth Baum, GCRI Special Advisor for Government Affairs Jared Brown, and GCRI Senior Advisor Gary Ackerman, as well as Arden Rowell of the University of Illinois College of Law and Paul Slovic of Decision Research and the University of Oregon. Barrett also presented a poster at the meeting.


Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
