December Newsletter: Year in Review

by Seth Baum | 27 December 2017

Dear friends,

It has been another productive year for GCRI. Though we have a limited budget, we’ve made major contributions to global catastrophic risk research. Here are some highlights:

* GCRI hosted its largest-ever series of symposia on global catastrophic risk at the 2017 Society for Risk Analysis (SRA) conference, prompting SRA to encourage us to lead the formation of an official global catastrophic risk group within SRA.

* GCRI affiliates presented at numerous other events throughout the year, including dedicated global catastrophic risk events at UCLA and in Gothenburg.

* GCRI Executive Director Seth Baum published a major report, “A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy”, and a paper in the journal AI & Society, “Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence”.

* GCRI Director of Research Tony Barrett published two papers on core GCRI ideas: “Value of GCR Information: Cost Effectiveness-Based Approach for Global Catastrophic Risk (GCR) Reduction” in the journal Decision Analysis, and “Towards an Integrated Assessment of Global Catastrophic Risk” (with Seth Baum) in the UCLA conference proceedings.

* GCRI Associate Dave Denkenberger launched a spinoff group, the Alliance to Feed the Earth in Disasters (ALLFED), to further his work on food catastrophes.

* GCRI Associate Jacob Haqq-Misra is guest-editing a special issue of Futures on the detectability of the future Earth and of terraformed worlds.

* GCRI Associate Roman Yampolskiy co-edited Technological Singularity: Managing the Journey and is guest-editing (with GCRI Junior Associate Matthijs Maas and others) a special issue of Informatica on superintelligence.

We are currently fundraising to continue this work in 2018. A recent survey of organizations working on AI risk found GCRI to be among the most cost-effective of them. Please consider donating to GCRI online or contact me about additional support opportunities.

Sincerely,
Seth Baum, Executive Director

Artificial Intelligence

GCRI Executive Director Seth Baum spoke at a Tech2025 Think Tank event titled “What is AI and How Can We Keep It from Harming Humanity?” in Brooklyn, NY on November 16. The event was covered in The Ink.nyc article “Debating the Potential Dangers of Artificial Intelligence”.

GCRI Associate Roman Yampolskiy gave a talk on “Artificial Intelligence Safety” at Jagiellonian University in Krakow, Poland on November 13. He gave another talk titled “Future of Money, Contracts and Negotiation in the Age of Intelligent Machines” at The Open Eyes Economy Summit in Krakow, Poland on November 14.

GCRI Associate Dave Denkenberger has a chapter, written with Alexey Turchin, on “Military AI as a Convergent Goal of Self-Improving AI”, forthcoming in 2018 in GCRI Associate Roman Yampolskiy’s edited volume AI Safety and Security.

Food Security

GCRI Associate Dave Denkenberger has a paper with Joshua Pearce, “Cost-Effectiveness of Interventions for Alternate Food in the United States to Address Agricultural Catastrophes”, coming out in the International Journal of Disaster Risk Reduction.
