December Newsletter: A Focus on Solutions

by Seth Baum | 15 December 2015

Dear friends,

This holiday season, please consider supporting the Global Catastrophic Risk Institute. You can donate online or contact me for further information. At this time, GCRI’s success is limited mainly by its available funding. And nothing beats giving the gift of protection from global catastrophe.

In my view, what’s ultimately important is not the risks themselves but the actions we can take to reduce them. A risk could be very large, but if we can’t do anything about it, then we should focus on something else. Yet people often focus on the risks. A classic example is An Inconvenient Truth, which basically spent an hour and a half saying “climate change is bad”. Similarly, people frequently ask me what the biggest risks are, when they should instead be asking how they and others can best reduce those risks.

With that in mind, I am delighted to announce Confronting Future Catastrophic Threats to Humanity, a special issue of the journal Futures co-edited by Bruce Tonn and myself. (Click here for the no-cost preprints.) This collection of research papers analyzes some of the actions that can be taken to reduce the risk of global catastrophe. It is also an effort to promote solution-oriented research across the global catastrophic risk community. While this is not the only study of global catastrophic risk solutions, it is the largest dedicated study to date.

As always, thank you for your interest in our work. We welcome any comments, questions, and criticisms you may have.

Sincerely,
Seth Baum, Executive Director

GCR News Summaries

Here are Robert de Neufville’s monthly news summaries for September, October, and November. As always, these summarize recent events across the breadth of GCR topics.

Barrett to Lead Security and Defense Group

Tony Barrett has been elected Chair of the Security and Defense Specialty Group (SDSG) of the Society for Risk Analysis. His term will begin in December 2016. The SDSG convenes experts on security and defense risks from across academia, government, and other sectors. Barrett is an established expert on security and defense risk, with publications on nuclear war risk, terrorism, and other topics.

Call For Papers: Robotic and AI Safety and Security

Roman Yampolskiy is Lead Guest Editor of a special issue in development, Robotic and AI Safety and Security, in the Journal of Robotics. Contributions are sought across a wide range of topics on the risks of robotics and AI and on associated safety measures.

Confronting Future Catastrophic Threats to Humanity

The special issue Confronting Future Catastrophic Threats to Humanity has been published in the journal Futures. (No-cost preprints available here.) The issue is co-edited by Seth Baum and Bruce Tonn. It features ten articles plus a full-length introduction, on topics including artificial intelligence, nuclear war, quantum computing, and measures for surviving global catastrophes.

New Popular Articles

Kaitlin Butler has published her first article for GCRI, Utah’s hopes for oil shale bonanza has a public relations problem, industry symposium hears, an in-depth look at an oil resource that is reportedly three times larger than Saudi Arabia’s. The article is published in DeSmogBlog.

Seth Baum has two articles arguing for greater usage of nuclear power: Japan should restart more nuclear power plants, published in the Bulletin of the Atomic Scientists, and Antinuclear Austria should lead the way on nuclear power, published in Scientific American Blogs.

Global Risk and Opportunity Report

GCRI contributed content to a new Global Risk and Opportunity Report, a news portal website from the Global Challenges Foundation. The site covers the full breadth of current events related to global catastrophic risks, with emphasis on the actions people are taking around the world to address the risks.

Jobs at Oxford Future of Humanity Institute

The Future of Humanity Institute at Oxford University is home to some of GCRI’s closest colleagues. It conducts excellent research on global catastrophic risk, with emphasis on risks from artificial intelligence. It is currently recruiting for several positions on AI strategy, policy, ethics, and related topics. Please click here for more information.
