November Newsletter: Year in Review

23 November 2021

Dear friends,

2021 has been a year of overcoming challenges, of making the most of difficult circumstances. The Delta variant dashed hopes for a smooth recovery from the COVID-19 pandemic. Outbreaks have surged even in places with high vaccination rates, raising questions of when, or even whether, the pandemic will end. As we at GCRI are abundantly aware, it could be a lot worse. But it has still been bad, and we send our condolences to those who have lost loved ones.

Despite the circumstances, GCRI has managed to have a good year. In a new blog post, we review GCRI’s 2021 accomplishments, our plans for 2022, and the fundraising that would help us achieve these plans. As the post explains, we have produced more work addressing global catastrophic risk than in previous years through our research, outreach, and community support activities. We have benefited from a growing team, including new Research Associate Andrea Owe, and from a wealth of external collaborators, in part thanks to our successful Advising and Collaboration Program. This gives us an excellent foundation for our work in 2022 and beyond.

Your support can help GCRI address global catastrophic risk. Please consider GCRI in your donations.

We will enter the new year with a new team. Robert de Neufville is leaving GCRI in December to pursue other activities. Robert is one of the longest-tenured members of the GCRI team, having been involved since 2013 and having held the position of Director of Communications since 2016. He will be greatly missed, and we wish him the best in his future endeavors. Additionally, McKenna Fitzgerald has been promoted to the position of Deputy Director. McKenna joined GCRI last year as Project Manager and Research Assistant. She has been an invaluable contributor, and we are confident she will excel in her new role.

Sincerely,
Seth Baum
Executive Director

2021 Advising and Collaboration Program

In May, GCRI put out an open call for people interested in seeking our advice or in collaborating on projects with us. The open call was a continuation of the Advising and Collaboration Programs GCRI conducted in 2019 and 2020. The 2021 program proved to be our most successful iteration yet, with a bigger, more diverse group of participants than ever before. Some of the participants offered testimonials about their experience here.

New GCRI Fellowship Program

GCRI is pleased to announce its new GCRI Fellowship Program. The program recognizes a select group of twelve GCRI Fellows who made exceptional contributions to addressing global catastrophic risk in collaboration with GCRI in 2021. The inaugural class of GCRI Fellows ranges from undergraduates to senior professionals, with contributions spanning research in a range of disciplines, policy outreach, program development, and other activities.

New Papers on Artificial Intelligence

GCRI Executive Director Seth Baum and GCRI Research Associate Andrea Owe have a new paper forthcoming in Ethics, Policy & Environment titled Artificial intelligence needs environmental ethics. The paper calls for environmental ethics perspectives to play a greater role in work on AI issues.

GCRI Executive Director Seth Baum is also a co-author of a new paper in Technology in Society led by Victor Galaz, Deputy Director of the Stockholm Resilience Centre, in conjunction with the Global Systemic Risk Group at Princeton. The paper, titled Artificial intelligence, systemic risks, and sustainability, looks at the role AI plays in the related domains of systemic risks and environmental sustainability.

Society for Risk Analysis Meeting Presentations

Three collaborators who connected with GCRI through its 2021 Advising and Collaboration Program will present research conducted with GCRI at the annual meeting of the Society for Risk Analysis (SRA), December 5-9. GCRI has participated in the SRA’s annual meeting in most years since 2010, as detailed here.

EA Giving Tuesday Event on Facebook

EA Giving Tuesday is collaborating with Rethink Charity to raise money for effective organizations like GCRI through Facebook’s Giving Tuesday event on November 30. Facebook will match up to $8 million in donations to organizations like ours, but the matching funds will go quickly. If you would like your donations to GCRI to be matched, EA Giving Tuesday instructions are available here. The page for donating to GCRI on Facebook is here.

