December Newsletter: The US Election & Global Catastrophic Risk

by Seth Baum | 10 December 2016

Dear friends,

The recent US election offers a vivid reminder of how large and seemingly unlikely events can and do sometimes occur. Just as we cannot assume that elections will continue to be won by normal politicians, we also cannot assume that humanity will continue to avoid global catastrophe.

The outcome of this election has many implications for global catastrophic risk, which I outline in a new article in the Bulletin of the Atomic Scientists. To my eyes, the election increases the importance of nuclear weapons risk relative to other risks. It also draws attention to two major political issues: the possible decline of democracy in the US and other Western countries, and the possible loss of the post-WWII international order. Political science should play a greater role in the study of global catastrophic risk.

GCRI will continue to monitor these dynamics closely. Indeed, we are well set up for it, given our strong backgrounds in policy, political science, and other social sciences, as well as across the full range of global catastrophic risks. Above all, we seek to clarify what the political trends and events mean for global catastrophic risk and for the opportunities that each of us has to reduce the risk.

This holiday season, please consider donating to GCRI to support our work to study and reduce global catastrophic risk. Your tax-deductible contribution helps keep human civilization intact.

Sincerely,
Seth Baum, Executive Director

Popular Media

Jacob Haqq-Misra reviewed Olle Häggström’s Here Be Dragons: Science, Technology and the Future of Humanity for Law, Innovation and Technology.

David Denkenberger was interviewed in Davos, Switzerland for German public broadcasting station Deutschlandfunk (audio in German).

Seth Baum published an article, “What Trump means for global catastrophic risk,” in the Bulletin of the Atomic Scientists.

Baum was also featured in a two-part series on mass extinction on The Adventures of Memento Mori podcast. Part 1 is available here and part 2 is available here.

Upcoming Events

GCRI Director of Research Tony Barrett will host and speak at a symposium on “Current and Future Global Catastrophic Risks” on December 14 as part of the Society for Risk Analysis (SRA) Annual Meeting. SRA is the premier academic and professional society for risk analysis. GCRI has led symposiums at SRA since 2010. The 2016 GCRI symposium features five talks focused on risks from AI and nuclear weapons.

Past Events

GCRI Associate Roman Yampolskiy gave two talks in Lisbon, Portugal: a talk on “The Dividing Line Between Humans and Machines” at Web Summit on November 8, 2016 and a talk on “Risks of Artificial Superintelligence” at the Champalimaud Centre for the Unknown on November 9, 2016.

Roman Yampolskiy and Seth Baum both presented at the Envision Conference at Princeton University, an event for undergraduate students and early-career professionals in technology fields.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
