November Newsletter: A Year of Growth

by Seth Baum | 26 November 2019

Dear friends,

2019 has been a year of growth for GCRI. One year ago, we described a turning point for the organization and announced our goal of scaling up to increase our impact on global catastrophic risk. Over the past year, we have made considerable progress toward this goal. We have expanded our team, published work in top journals such as Science and Risk Analysis, and hosted a tremendously successful advising and collaboration program in support of talented people around the world. All this and more is detailed in our new blog post, Summary of 2019-2020 GCRI Accomplishments, Plans, and Fundraising.

We are currently seeking to raise up to $1.5 million so that we can continue to scale up. We believe this would enable us to maximize our potential to reduce global catastrophic risk. Anyone interested in contributing can do so via our donate page or by contacting me directly.

Sincerely,
Seth Baum, Executive Director

Artificial Intelligence

GCRI’s Seth Baum, Robert de Neufville, and Tony Barrett have a new paper with GCRI Senior Advisor Gary Ackerman titled “Lessons for Artificial Intelligence from Other Global Risks”. The paper draws important lessons for the study of AI risk from the study of four other risks: biotechnology, nuclear weapons, global warming, and asteroids. The paper will be published in a new CRC Press collection edited by Maurizio Tinnirello titled The Global Politics of Artificial Intelligence.

GCRI’s Advising and Collaboration Program

In May, GCRI put out an open call for people interested in seeking our advice or collaborating with us on projects. We received inquiries from talented people in more than 20 countries around the world. In the ensuing conversations, we provided guidance to many people who are relatively new to the field of global catastrophic risk. In turn, many of them gave us valuable input on our other work or made valuable contributions to it. We summarize how the program turned out in greater detail here. Jia Yuan Loke, a research associate at the Centre for AI and Data Governance in Singapore who ended up collaborating with us on a conference paper that is currently under review, described his experience with our advising and collaboration program here.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
