January Newsletter: Vienna Conference on Nuclear Weapons

by Seth Baum | 11 January 2015

Dear friends,

In December, I had the honor of speaking at the Vienna Conference on the Humanitarian Impact of Nuclear Weapons, hosted by the Austrian Foreign Ministry in the lavish Hofburg Palace. The audience of 1,000 people included representatives of 158 national governments as well as leading nuclear weapons NGOs, experts, and members of the media.

My talk, “What is the risk of nuclear war?”, presented core themes from the risk analysis of nuclear war. I explained that each of us is, on average, more likely to die from nuclear war than from car crashes. I also stressed that the risk increases over time: the longer we wait, the more likely a nuclear war is to occur. The intent was to give the audience a sense of urgency on this important issue. The Chair’s Summary and the Austrian Pledge indicate that this message was heard.
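To see why waiting raises the cumulative risk, consider a minimal sketch. The 1% annual probability below is an assumed placeholder for illustration, not an estimate from the talk or from GCRI: under any constant yearly chance of nuclear war, the probability of at least one war over N years is 1 − (1 − p)^N, which compounds toward certainty as the horizon lengthens.

```python
# Illustrative sketch only: the 1% annual probability is an assumed
# placeholder, not an estimate from the talk or from GCRI.
p_annual = 0.01  # assumed chance of nuclear war in any given year

for years in (10, 25, 50, 100):
    # P(at least one war in N years) = 1 - (1 - p)^N,
    # assuming a constant, independent annual probability
    p_cumulative = 1 - (1 - p_annual) ** years
    print(f"{years:>3} years: {p_cumulative:.0%} chance of at least one nuclear war")
```

Under this assumed 1% figure, the chance of at least one war exceeds 60% over a century. That compounding, whatever the true annual probability, is what gives the issue its urgency.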

My talk represents a big part of what GCRI aspires to: open discussion of the major global catastrophic risks, grounded in the best research and conducted with key stakeholders and decision makers. GCRI is just a few years old, led by early-career researchers and supported by a rather small budget. For us to have this scale of impact already speaks both to the quality of the work we’re doing and to the significant demand that exists for it. If you would like to support our work, please visit our donate page or contact me directly.

Available online are my talk text, slides, and video (YouTube, beginning at 49:40).

As always, thank you for your interest in our work. We welcome any comments, questions, and criticisms you may have.

Sincerely,
Seth Baum, Executive Director

GCR News Summaries

Robert de Neufville’s latest news summaries are available here: GCR News Summary December 2014; GCR News Summary November 2014. As always, these summarize recent events across the breadth of GCR topics.

New Futures Special Issue Papers

The first two papers to be published in the Futures special issue “Confronting future catastrophic threats to humanity”, co-edited by Seth Baum of GCRI and Bruce Tonn of the University of Tennessee, are now online:

How much could refuges help us recover from a global catastrophe? (subscription/paywall) by Nick Beckstead of GiveWell. This article argues that refuges would not be a cost-effective means of reducing global catastrophic risk.

Feeding everyone: Solving the food crisis in event of global catastrophes that kill crops or obscure the sun by David Denkenberger of GCRI and Joshua Pearce of Michigan Technological University. This article further develops Denkenberger and Pearce’s work on alternative foods for surviving global catastrophes, as presented in their new book Feeding Everyone No Matter What.

Feeding Everyone Publications & Media Coverage

In addition to their article in Futures, Denkenberger and Pearce have published two short summaries of their new book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, both via the publisher Elsevier. 10 ways to feed ourselves after a global agricultural collapse appears at Elsevier Connect, and Life after global catastrophe: How do we feed everyone? appears at Elsevier SciTechConnect.

The book has also been getting extensive media attention since its publication in November, including Discovery News, Gizmodo, Phys.org, and Science Daily. Full coverage is documented here.

Other Publications

Seth Baum has published an article, The great downside dilemma for risky emerging technologies, in the journal Physica Scripta. The article discusses whether to develop technologies, such as geoengineering and artificial intelligence, that promise great benefits to humanity but come with a risk of global catastrophe.

Seth Baum has published a review of the recent film Snowpiercer at the Journal of Sustainability Education. The review discusses sustainability, resource management, and geoengineering themes in the film.

Seth Baum has a new column up at the Bulletin of the Atomic Scientists: Nuclear war, the black swan we can never see. The column discusses nuclear war risk and why we cannot assume the risk is zero just because a major nuclear war has never happened before.
