Summer Newsletter: New Publications

by Seth Baum | 21 August 2014

Dear friends,

There have been a number of interesting new publications coming out of the GCR research community recently, both from GCRI and elsewhere. They cover a range of topics, from environmental risks to artificial intelligence to refuges for protecting against unknown threats. Summaries of the publications appear at the bottom of this newsletter. These are good times for GCR research, and they are about to get even better next year when the Futures special issue comes out. That is, assuming we get good submissions! The submission deadline is September 1. I’m expecting plenty of good submissions, but we’ll see. Meanwhile, if you have research you’d like to see featured in a future newsletter, please let me know.

As always, thank you for your interest in our work. We welcome any comments, questions, and criticisms you may have.

Sincerely,
Seth Baum, Executive Director

GCR News Summaries

Robert de Neufville’s latest news summaries are available here: GCR News Summary May 2014, GCR News Summary June 2014, and GCR News Summary July 2014. As always, these summarize recent events across the breadth of GCR topics.

Submissions Now Open: Confronting Future Catastrophic Threats To Humanity

Authors with articles for the Futures special issue Confronting Future Catastrophic Threats To Humanity can now submit their articles. The call for papers listed 15 August as the date submissions would open, but they are already open. Please use the regular Futures submission process and select the special issue under “article type”. Submissions are still due 1 September 2014.

Futurability App For Android & iPhone Testing

GCRI colleague Itsuki Handoh of the Research Institute for Humanity & Nature (RIHN) is developing a new smartphone app, “Value-Action Net for Futurability”, as part of the Consilience Cyberspace project. Futurability is a concept related to sustainability whose development RIHN is leading. The app helps users connect with each other to advance futurability and helps researchers understand how social networks can be leveraged toward it. You can help out by installing the app on your phone and testing it. It is available for Android and iPhone.

GCR At Sydney Festival Of Dangerous Ideas

The upcoming Festival of Dangerous Ideas at the Sydney Opera House will include several events related to GCR.
* We Are Risking Our Existence, featuring GCRI colleagues Jaan Tallinn and Huw Price, 31 August.
* Human Existence Doesn’t Matter, featuring Huw Price, Rebecca Newberger Goldstein, and Francesca Minerva, 30 August.
* The End Of The World As We Know It, featuring Jaan Tallinn, Tim Flannery, Elizabeth Kolbert, and Steven Pinker, 30 August.

New GCR Publications

First, GCRI has one new full research article out, plus two shorter academic reviews.

The full article is Integrating the planetary boundaries and global catastrophic risk paradigms by Seth Baum and Itsuki Handoh. Global catastrophic risk (GCR) and planetary boundaries (PBs) are two major paradigms to emerge in recent years for studying global threats to humanity and nature. This paper presents a conceptual framework called Boundary Risk for Humanity and Nature (BRIHN) that pulls PBs and GCR together. BRIHN combines probabilistic thinking from GCR with systemic resilience thinking from PBs. The paper uses the case study of the phosphorus biogeochemical cycle to illustrate BRIHN.
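
To make the combination a bit more concrete, here is a minimal Python sketch of one way probabilistic GCR thinking can attach to a PB-style threshold: treat the boundary as a threshold on a control variable (such as phosphorus flow) and compute the probability of crossing it under uncertainty. The normal error model, the function name, and all of the numbers are illustrative assumptions, not the BRIHN framework itself or values from the paper.

```python
# Toy sketch only (not the BRIHN model itself): probability that a noisy
# control variable, such as phosphorus flow, exceeds a planetary boundary.
# The normal error model and all numbers below are illustrative assumptions.
import math

def boundary_crossing_probability(flow, boundary, uncertainty):
    """P(control variable > boundary) assuming a normal error model."""
    z = (boundary - flow) / uncertainty
    return 0.5 * math.erfc(z / math.sqrt(2))  # standard normal tail probability

# Hypothetical values: current flow near the boundary, with substantial uncertainty
print(boundary_crossing_probability(flow=11.0, boundary=12.0, uncertainty=2.0))  # ~0.31
```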

One review is a book review by Seth Baum of Only One Chance: How Environmental Pollution Impairs Brain Development – and How to Protect the Brains of the Next Generation by Philippe Grandjean. The book describes the threat of chemical pollution to early brain development. It is not clear from the book how severe a global catastrophe chemical pollution could be. Part of the issue, which the book carefully documents, is that the chemicals industry prevents research on chemical pollution.

The other review is a film review by Seth Baum of Transcendence, a new film about an artificial intelligence taking over the world. While the film is fictional, it raises very real issues about AI risk. One question is how to manage AI research and development, given the risks but also the many parties involved. Another question is whether the world might be better off with an AI in control, given the sacrifices humanity might need to make to stay in control.

Now, some research from outside GCRI. Please note, this is not an exhaustive compilation of recent GCR research. There is simply too much going on for that!

Continuing on AI risk, Nick Bostrom has a new book out, Superintelligence: Paths, Dangers, Strategies. The book covers a variety of issues relating to the possibility of AI outsmarting humanity and taking over the world. Reviews of the book can be found at (among other places) Overcoming Bias (Robin Hanson’s blog), The Guardian, and The Financial Times. See also the book’s pages on Amazon and Wikipedia.

Bostrom and Vincent Müller also have a new paper Future progress in artificial intelligence: A poll among experts. Since superintelligent AI involves unprecedented and speculative technologies, there is disagreement about whether or when it will arrive. This paper polls several groups of AI researchers, including some who focus on this type of AI. The results show that many experts give a significant probability to superintelligence arriving sometime this century, though with significant variation among them.

Bruce Tonn and Dorian Stiefel have a new paper Human extinction risk and uncertainty: Assessing conditions for action. The paper provides a framework for translating ethical views about human extinction risk into levels of action to reduce the risk. The levels of action range from doing nothing to orienting the entire global economy towards reducing the risk. The framework uses a treatment of imprecise probability involving lower and upper probability bounds for the risk.
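
As a rough illustration of the kind of mapping the framework describes, the hypothetical sketch below converts lower and upper probability bounds, together with a simple risk-averse-or-not stance, into one of several action levels. The thresholds, labels, and the risk_averse parameter are invented for illustration and are not the paper’s categories or values.

```python
# Hypothetical sketch: map imprecise extinction-risk bounds plus an ethical
# stance to a level of action. Thresholds, labels, and the risk_averse switch
# are illustrative assumptions, not values from Tonn and Stiefel.

ACTION_LEVELS = [
    "do nothing",
    "modest dedicated programs",
    "major dedicated programs",
    "orient the entire global economy toward risk reduction",
]

def action_level(lower_p, upper_p, risk_averse=True):
    """Choose an action level from lower/upper probability bounds on the risk.
    A risk-averse stance acts on the upper bound; otherwise on the lower bound."""
    p = upper_p if risk_averse else lower_p
    if p < 1e-6:
        return ACTION_LEVELS[0]
    if p < 1e-3:
        return ACTION_LEVELS[1]
    if p < 1e-1:
        return ACTION_LEVELS[2]
    return ACTION_LEVELS[3]

print(action_level(lower_p=1e-5, upper_p=0.02))  # -> "major dedicated programs"
```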

Kathleen Vogel and Christine Knight have a new paper Analytic outreach for intelligence: Insights from a workshop on emerging biotechnology threats. The paper describes the authors’ engagement with the intelligence sector on bioweapons and biotechnology threats. It is the latest in the authors’ series of publications, together with colleague Sonia Ben Ouagrham-Gormley, on the social side of bioweapons. The current paper is a good example of how scholarly research on GCR can be translated into actual risk reductions through engagement with relevant professional sectors.

Karim Jebari has a new paper Existential risks: Exploring a robust risk reduction. The paper focuses on global catastrophes that come as a surprise, like the proverbial black swan. Such catastrophes pose challenges to risk management based on known risks. The paper proposes isolated, self-sufficient, and continuously inhabited underground refuges as a solution that can protect against a variety of catastrophes, including some surprises.

Michael Mills, Owen Toon, Julia Lee-Taylor and Alan Robock have a new paper Multi-decadal global cooling and unprecedented ozone loss following a regional nuclear conflict. This paper extends the group’s earlier work on the science of nuclear winter with a more sophisticated Earth system model. This paper, along with many of their previous papers, focuses on a scenario in which India and Pakistan each launch 50 weapons of 15 kiloton yield (similar to the Hiroshima and Nagasaki bombs) at each other’s cities. The paper finds major global cooling lasting 25 years or more, along with major ozone loss worldwide.

Mark Neal has a new paper Preparing for Extraterrestrial Contact. The paper argues for attention to extraterrestrial life within the fields of risk and disaster management. The paper describes the field of astrobiology and its increasingly mainstream scientific support. The paper then discusses five scenarios, from extraterrestrial life not existing to extraterrestrial life being more advanced than humanity. The latter poses the greatest risks, and in some ways resembles risks from superintelligent AI.

Finally, Alexander Glaser, Boaz Barak and Robert J. Goldston have a new paper A zero-knowledge protocol for nuclear warhead verification. The paper presents a new technique for verifying nuclear weapons stockpiles. The technique offers a solution to a basic paradox of verification: inspectors are supposed to confirm that declared nuclear weapons are actually present without learning anything about the weapons themselves, because learning about them could pose a proliferation risk. If successful, this new technique could overcome a significant technical hurdle for nuclear disarmament.
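
To illustrate only the abstract idea of a differential, zero-knowledge-style check (confirming a match while revealing nothing about the underlying signature), here is a hypothetical numerical sketch. The actual protocol works with physical neutron measurements rather than software; the channel counts, tolerance, and function names below are invented for illustration.

```python
# Toy numerical sketch of a differential, zero-knowledge-style comparison:
# the host preloads the detector with the complement of the expected signal,
# so an authentic item yields a featureless (flat) readout. The inspector sees
# only "flat" or "not flat" and learns nothing about the signal itself.
# All numbers are hypothetical; the real protocol is physical, not software.

import random

EXPECTED_TOTAL = 100  # counts per channel for an authentic item (hypothetical)

def preload(expected_signal):
    """Host preloads each channel with the complement of the expected signal."""
    return [EXPECTED_TOTAL - s for s in expected_signal]

def inspect(item_signal, preloaded, tolerance=5):
    """Inspector checks only that every channel sums to the expected total."""
    readout = [p + s for p, s in zip(preloaded, item_signal)]
    return all(abs(r - EXPECTED_TOTAL) <= tolerance for r in readout)

# Hypothetical per-channel signals
authentic = [random.randint(20, 80) for _ in range(8)]
fake = [random.randint(20, 80) for _ in range(8)]

detector = preload(authentic)
print(inspect(authentic, detector))  # True: readout is flat, nothing else revealed
print(inspect(fake, detector))       # almost certainly False for a non-matching item
```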

References

Baum, Seth D. and Itsuki C. Handoh, 2014. Integrating the planetary boundaries and global catastrophic risk paradigms. Ecological Economics, forthcoming, DOI: 10.1016/j.ecolecon.2014.07.024.

Baum, Seth D., 2014. Book review: Only One Chance: How Environmental Pollution Impairs Brain Development – and How to Protect the Brains of the Next Generation (pdf). Environmental Science & Policy, forthcoming, DOI: 10.1016/j.envsci.2014.07.001.

Baum, Seth D., 2014. Film review: Transcendence (pdf). Journal of Evolution and Technology, forthcoming.

Bostrom, Nick, 2014. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Glaser, Alexander, Boaz Barak and Robert J. Goldston, 2014. A zero-knowledge protocol for nuclear warhead verification. Nature 510 (26 June), 497-502.

Jebari, Karim, 2014. Existential risks: Exploring a robust risk reduction. Science & Engineering Ethics, forthcoming, DOI: 10.1007/s11948-014-9559-3.

Mills, Michael J., Owen B. Toon, Julia Lee-Taylor and Alan Robock, 2014. Multi-decadal global cooling and unprecedented ozone loss following a regional nuclear conflict. Earth’s Future 2(4), 161-176.

Müller, Vincent C. and Nick Bostrom, 2014. Future progress in artificial intelligence: A poll among experts. In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Berlin: Springer.

Neal, Mark, 2014. Preparing for Extraterrestrial Contact. Risk Management, 16(2): 63-87.

Tonn, Bruce and Dorian Stiefel, 2014. Human extinction risk and uncertainty: Assessing conditions for action. Futures, forthcoming, DOI: 10.1016/j.futures.2014.07.001.

Vogel, Kathleen and Christine Knight, 2014. Analytic outreach for intelligence: Insights from a workshop on emerging biotechnology threats. Intelligence and National Security, forthcoming, DOI: 10.1080/02684527.2014.887633.
