September Newsletter: AI, Nuclear War, and News Projects

15 September 2015

Dear friends,

I’m delighted to announce three new funded projects. Two of them are for risk modeling, one on artificial intelligence and one on nuclear war. These follow directly from our established nuclear war and emerging technologies research projects. The third is for covering current events across the breadth of global catastrophic risk topics, following directly from our news summaries. It is an honor to be recognized for our work and to have the opportunity to expand it. Please stay tuned as these projects unfold.

As always, thank you for your interest in our work. We welcome any comments, questions, and criticisms you may have.

Sincerely,
Seth Baum, Executive Director

GCR News Summaries

Here are Robert de Neufville’s monthly news summaries for June, July, and August. As always, these summarize recent events across the breadth of GCR topics.

Society for Risk Analysis 2015 Annual Meeting

The Society for Risk Analysis is the leading academic and professional society for all aspects of risk. GCRI hosts sessions on global catastrophic risk at its annual meeting each year, bringing together leading experts in the field. This year, GCRI is hosting two sessions, one on armed conflict (covering both local- and global-scale conflict) and one on global catastrophic risks in general. The conference is 6-10 December in Arlington, VA, and GCRI’s sessions are on the 7th and 8th. Session details are available here.

New Grant: Artificial Intelligence

GCRI has received a grant for research on artificial intelligence risk through a grant competition hosted by the Future of Life Institute with funding from Elon Musk and the Open Philanthropy Project. The project team includes Tony Barrett, Roman Yampolskiy, and Seth Baum. The project title is “Evaluation of Safe Development Pathways for Artificial Superintelligence”. Details are available here.

New Grant: Nuclear War

GCRI has received a grant on nuclear war risk from the Global Challenges Foundation. The grant supports modeling the probability and severity of a range of nuclear war scenarios. Details are available here.
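To give a rough sense of what this kind of modeling can involve, here is a minimal sketch in Python of estimating the annual probability of one nuclear war scenario by decomposing it into conditional steps and propagating parameter uncertainty with Monte Carlo sampling. The scenario structure, parameter names, and probability ranges are hypothetical placeholders for illustration, not GCRI's model or results.

```python
import random

def sample_annual_probability():
    """Draw one sample of the annual probability of an inadvertent
    nuclear war scenario by multiplying conditional step probabilities,
    each drawn from a hypothetical uncertainty range."""
    p_false_alarm = random.uniform(0.01, 0.10)   # a serious false alarm occurs in a given year
    p_escalation = random.uniform(0.001, 0.05)   # the alarm reaches a launch decision
    p_launch = random.uniform(0.01, 0.20)        # the decision results in an actual launch
    return p_false_alarm * p_escalation * p_launch

# Monte Carlo: propagate the parameter uncertainty through the scenario model.
samples = sorted(sample_annual_probability() for _ in range(100_000))
mean = sum(samples) / len(samples)
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]

print(f"annual probability  mean: {mean:.2e}  median: {median:.2e}  95th pct: {p95:.2e}")
```

A severity model could be layered on top in the same way, for example by sampling the scale of the exchange and the resulting damages conditional on a war occurring.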

New Grant: GCR Current Events

GCRI has received a grant from the Global Challenges Foundation to track current events across the breadth of global catastrophic risk topics. Details are available here.

New Paper: AI Risk

GCRI has published the first paper from its new line of research on artificial intelligence risk: Risk analysis and risk management for the artificial superintelligence research and development process, authored by Tony Barrett and Seth Baum.

New Science Article: Biological Weapons

GCRI Associate Gary Ackerman, together with colleagues Crystal Boddie, Matthew Watson, and Gigi Kwik Gronvall, published the paper Assessing the bioweapons threat in the journal Science. The paper presents a Delphi survey of 62 leading experts on the likelihood of a large-scale biological attack within the next 10 years and the likelihood of actionable intelligence about such an attack.
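For a rough illustration of how responses from an expert elicitation like this can be summarized, here is a small Python sketch that aggregates a panel's probability estimates and reports the median and interquartile range, a common way to present Delphi-style results. The numbers are invented for illustration and are not the survey's data.

```python
import statistics

# Hypothetical probability estimates (0 to 1) from a panel of experts for
# "a large-scale biological attack within the next 10 years" -- invented values.
estimates = [0.02, 0.05, 0.05, 0.10, 0.10, 0.15, 0.20, 0.25, 0.30, 0.40]

median = statistics.median(estimates)
q1, _, q3 = statistics.quantiles(estimates, n=4)  # quartile cut points

print(f"median estimate: {median:.2f}")
print(f"interquartile range: {q1:.2f} to {q3:.2f}")
```

In an actual Delphi process, the panel would see summaries like these between rounds and could revise their estimates before a final aggregate is reported.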

Symposium: Winter-Safe Deterrence

The journal Contemporary Security Policy has published a symposium on Seth Baum’s paper Winter-safe deterrence: The risk of nuclear winter and its challenge to deterrence. The symposium features contributions from international security experts Aaron Karp & Regina Karp, Christian Enemark, Jean Pascal Zanders, and Patricia Lewis, as well as a reply by Baum. Symposium details are available here.

New Popular Articles

Seth Baum and Trevor White have published the article When robots kill in The Guardian’s Political Science blog. The article discusses AI risks from driverless cars to superintelligence.

Seth Baum has two new articles in the Bulletin of the Atomic Scientists:

A picture’s power to prevent, on the significance of the 70th anniversary of the Hiroshima and Nagasaki bombings.

Breaking down the risk of nuclear deterrence failure, on how the risk of major war with versus without nuclear weapons bears on whether nuclear weapons should be rapidly disarmed.

Sneak Preview: Futures Special Issue

Confronting Future Catastrophic Threats to Humanity will be a special issue of the journal Futures, co-edited by Seth Baum of GCRI and Bruce Tonn of the University of Tennessee. The issue is now in production, and a ‘sneak preview’ with preprints of most articles is available online here.
