May Newsletter: The Value of GCR Research

by Seth Baum | 12 May 2017

Dear friends,

People often ask me why we set up GCRI as a think tank rather than an organization that works more directly to reduce the risks. The reason is that when it comes to global catastrophic risk, a little well-designed research goes a long way: it helps us make better decisions about how to reduce the risks.

For example, last week I attended a political science workshop at Yale University on how to cost-effectively spend $10 billion to reduce the probability of war between the great powers. We discussed many good ideas for things like reducing misperceptions and avoiding conflicts over other important countries. But the best idea may be a good research agenda. If that much money is to be spent, then we should first spend at least a few hundred thousand dollars to gain more confidence in how best to spend the rest.

A new paper by GCRI’s Tony Barrett develops this idea further. The paper, forthcoming in the journal Decision Analysis, applies the concept of value of information (VOI) to global catastrophic risk. Research has high VOI to the extent that it improves decision making. For example, if research shows that a $20 million project achieves the same 1% risk reduction as a $25 million project, then the research is worth $5 million, even if the research itself costs much less. The paper outlines how an integrated assessment of global catastrophic risk could yield especially high value information by producing decision-relevant information across the full breadth of the risk.
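The arithmetic in that example can be sketched as follows. This is a minimal illustration of the cost-difference intuition only, using the dollar figures from the example above; it is not the formal VOI model in the paper.

```python
def value_of_information(cost_without_research: int, cost_with_research: int) -> int:
    """VOI, illustrated as the spending saved when research reveals a
    cheaper project that achieves the same risk reduction."""
    return cost_without_research - cost_with_research

# Without the research, the decision maker would fund the $25M project.
# With it, they learn the $20M project delivers the same 1% risk reduction.
voi = value_of_information(25_000_000, 20_000_000)
print(voi)  # 5000000

# The research is worthwhile whenever its own cost is below the VOI,
# e.g. a $300,000 study here nets $4.7M in improved decision making.
net_benefit = voi - 300_000
print(net_benefit)  # 4700000
```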

The VOI perspective speaks to why GCRI is a think tank and how our research agenda is designed: we see great opportunities for select research to improve decision making on global catastrophic risk.

Sincerely,
Seth Baum, Executive Director

General Risk

GCRI Director of Research Tony Barrett’s new paper on the “Value of GCR Information: Cost Effectiveness-Based Approach for Global Catastrophic Risk (GCR) Reduction” is forthcoming in Decision Analysis (a non-technical summary of the paper is available here). The paper uses a value-of-information (VOI) approach to argue that a comprehensive, integrated assessment of global catastrophic risks and risk-reduction options would greatly help in evaluating both GCR reduction efforts and GCR research decisions.

Artificial Intelligence

GCRI Associate Roman Yampolskiy gave a talk titled “Towards Good AI” at the Machine Learning Prague conference on the pathways that could lead to the development of dangerous artificial general intelligence (AGI).

GCRI Director of Research Tony Barrett will give a talk on superintelligence risk and policy analysis at the 2017 Governance of Emerging Technology conference at Arizona State.

Calls for Papers

GCRI Associate Roman Yampolskiy and GCRI Junior Associate Matthijs Maas are among the guest editors of a special issue of Informatica on superintelligence. They are looking for original research, critical studies, and review articles on topics related to superintelligence. The deadline for submitting papers is August 31, 2017. Final manuscripts are due November 30, 2017.

GCRI Associate Jacob Haqq-Misra is guest-editing a special issue of Futures on the detectability of the future Earth and of terraformed worlds. He is looking for papers that consider the future evolution of the Earth system from an astrobiological perspective, as well as how humanity or other technological civilizations could artificially create sustainable ecosystems on lifeless planets. Abstracts of 200-300 words should be sent to Jacob Haqq-Misra by May 31, 2017.

Help us make the world a safer place! The Global Catastrophic Risk Institute depends on your support to reduce the risk of global catastrophe. You can donate online or contact us for further information.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.