January Newsletter: Superintelligence & Hawaii False Alarm

29 January 2018

Dear friends,

This month marks the release of Superintelligence, a special issue of the journal Informatica co-edited by GCRI’s Matthijs Maas and Roman Yampolskiy along with Ryan Carey and Nell Watson. It contains an interesting mix of papers on AI risk. One of the papers is “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”, co-authored by Yampolskiy, Tony Barrett, and me. The paper applies our ASI-PATH risk model to an ongoing debate between two leading AI risk experts, Nick Bostrom and Ben Goertzel, showing how risk analysis can capture key features of the debate to guide important AI risk management decisions.

This January also saw a nuclear war false alarm in Hawaii. The Hawaii Emergency Management Agency accidentally sent out text messages stating “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” GCRI Director of Communications Robert de Neufville lives in Honolulu and experienced the incident firsthand; you can read his vivid account of it here. It is a good reminder of the terror that nuclear weapons can bring. In the coming months, GCRI will release new research showing how false alarms like this one can be used to analyze the risk of nuclear war.

Sincerely,
Seth Baum, Executive Director

General Risk

GCRI hosted its largest-ever series of symposia on global catastrophic risk at the 2017 Society for Risk Analysis (SRA) conference in December, prompting SRA to encourage us to lead the formation of an official global catastrophic risk group within SRA.

GCRI executive director Seth Baum and director of research Tony Barrett have a paper titled “Towards an Integrated Assessment of Global Catastrophic Risk” in B.J. Garrick’s forthcoming edited volume, Proceedings of the First Colloquium on Catastrophic and Existential Risk.

GCRI associate Dave Denkenberger has a paper with Alexey Turchin, forthcoming in Futures, titled “Global Catastrophic and Existential Risks Communications Scale”, which proposes a Torino Scale for catastrophic and existential risks.

Artificial Intelligence

GCRI associate Roman Yampolskiy and junior associate Matthijs Maas along with Ryan Carey and Nell Watson edited a special issue of Informatica on superintelligence. GCRI executive director Seth Baum, director of research Tony Barrett, and Yampolskiy contributed a paper titled “Modeling and Interpreting Expert Disagreement About Artificial Superintelligence”. GCRI associate Dave Denkenberger also contributed a paper with Mikhail Batin, Alexey Turchin, Sergey Markov, and Alisa Zhila titled “Artificial Intelligence in Life Extension: from Deep Learning to Superintelligence”.

GCRI junior associate Matthijs Maas is presenting a paper titled “Regulating for ‘Normal AI Accidents’: Operational Lessons for the Responsible Governance of AI Deployment” at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society in New Orleans in February.

Food Security

GCRI associate Dave Denkenberger gave a presentation titled “Feeding the Earth If There Is a Global Agricultural Catastrophe” at the International Food Policy Research Institute and at the Society for Risk Analysis (SRA) conference in December in Washington, DC.

Volcanic Eruptions

GCRI associate Dave Denkenberger has a paper with Robert W. Blair, Jr. titled “Interventions That May Prevent or Mollify Supervolcanic Eruptions” forthcoming in Futures.
