March-April 2014 Newsletter

by Seth Baum | 23 April 2014

Dear friends,

I am pleased to announce the call for papers for a new special issue of the journal Futures, titled Confronting Future Catastrophic Threats To Humanity. I will be co-editing it with my colleague Bruce Tonn of the University of Tennessee. Over the years, Futures has probably published more research on global catastrophic risk than any other journal, including a special issue on human extinction that Tonn co-edited in 2009. It is an honor to be working with Tonn and Futures on a new special issue. This issue has a more practical bent, focusing on how the threats can be confronted. I believe more attention is needed on the positive, productive steps that people can take to reduce global catastrophic risk. This special issue is an effort in that direction.

Prospective authors should contact Tonn and/or me with their paper ideas. We welcome contributions from all disciplines, including from people without a formal background in futures studies. We look forward to hearing from you.

As always, thank you for your interest in GCRI. We welcome any comments, questions, and criticisms you may have.

Sincerely,
Seth Baum, Executive Director

GCR News Summaries

Robert de Neufville’s latest news summaries are available here: GCR News Summary February 2014 and GCR News Summary March 2014. As always, these summarize recent events across the breadth of GCR topics.

Essay Contests

There are two active essay contests of relevance to GCR:

How Should Humanity Steer the Future?
Website: http://www.fqxi.org/community/essay
Deadline: 18 April 2014
Sponsors: The Foundational Questions Institute, Jaan Tallinn, The Peter and Patricia Gruber Foundation, and The John Templeton Foundation, with media partner Scientific American

Preparing for the Distant Future of Civilization
Website: http://www.bmsis.org/essaycontest
Deadline: 22 April 2014
Sponsor: Blue Marble Space Institute of Science

Upcoming Talks at the United Nations

Seth Baum will deliver two talks at the United Nations in the coming weeks, both on nuclear weapons issues.

15 April: An experts meeting of the UN Security Council “P5” Permanent Members, hosted by the United States. Talk title: “Nuclear winter and the search for safer deterrence”.

1 May: The Preparatory Committee for the 2015 Nuclear Non-Proliferation Treaty Review Conference. Part of a session, “Accidental apocalypse: Probabilistic approaches to accidental nuclear war and human survival”, hosted by People for Nuclear Disarmament. Talk title: “The risk of inadvertent nuclear war between the United States and Russia”. Other speakers: Ward Wilson (Rethinking Nuclear Weapons Project), Steven Starr (Physicians for Social Responsibility), and Dominique Lalanne (Armes Nucleaires Stop).

Call for Papers: Confronting Future Catastrophic Threats To Humanity

The journal Futures is preparing a new special issue on the topic Confronting Future Catastrophic Threats To Humanity, co-edited by Seth Baum of GCRI and Bruce Tonn of the University of Tennessee. The call for papers states: “This special issue seeks to identify and discuss opportunities for action now that can help humanity prepare for catastrophic threats it may face in the future. Of interest are both actions to prevent the catastrophes, or to reduce their probability, and actions to help humanity endure them.”

Initial paper submissions are due 1 September 2014. For further details please see the call for papers web page.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
