October Newsletter: How to Reduce Risk

by Seth Baum | 6 October 2017

Dear friends,

As I write, a group of researchers is meeting in Gothenburg, Sweden, on the theme of existential risk. I joined them earlier in September. My commendations to Olle Häggström and Anders Sandberg for hosting an excellent event.

My talk in Gothenburg focused on how to find the best opportunities to reduce risk. The best opportunities are often a few steps removed from academic risk and policy analysis. For example, there is a large research literature on climate change policy, much of which factors in catastrophic risk. However, the United States still has little in the way of actual climate policy, which is due to our political process, not to any shortcomings in the research. Likewise, some of the best opportunities to reduce climate change risk involve engaging with the political process, often in ways that are unrelated to climate change. Yet at the same time, risk analysis is still needed to understand how much these opportunities can reduce the risk.

This is why GCRI’s flagship integrated assessment project connects risk analysis to real-world efforts to reduce the risk. By integrating risk analysis and engagement with people involved in the risks, we can figure out how best to reduce the risks in practice. This is our core goal.

In a new paper, “Towards an Integrated Assessment of Global Catastrophic Risk,” Tony Barrett and I describe our integrated assessment in detail. The paper goes from the conceptual foundations of risk to the analysis of specific risks to a suite of approaches for risk reduction in practice. The paper synthesizes much of our thinking on how to advance progress on global catastrophic risk. It is based on a talk we gave earlier this year at another great event on catastrophic risk, at the UCLA Garrick Institute for Risk Sciences.

Sincerely,
Seth Baum, Executive Director

General Risk

GCRI Executive Director Seth Baum and GCRI Director of Research Tony Barrett have a paper related to GCRI’s Integrated Assessment Project titled “Towards an Integrated Assessment of Global Catastrophic Risk” forthcoming in B.J. Garrick’s edited volume, Catastrophic and Existential Risk: Proceedings of the First Colloquium.

GCRI Executive Director Seth Baum and GCRI Director of Research Tony Barrett also have a paper titled “Global Catastrophes: The Most Extreme Risks” forthcoming in Vicki Bier’s edited volume Risk in Extreme Environments: Preparing, Avoiding, Mitigating, and Managing.

Artificial Intelligence

GCRI Executive Director Seth Baum has a paper on “Social Choice Ethics in Artificial Intelligence” forthcoming in AI & Society.

GCRI Junior Associate Trevor White and GCRI Executive Director Seth Baum have a paper titled “Liability Law for Present and Future Robotics Technology” in Patrick Lin, Keith Abney, and Ryan Jenkins’s new edited volume, Robot Ethics 2.0.

GCRI Director of Communications Robert de Neufville discussed artificial intelligence risk and artificial intelligence safety on the NonProphets podcast with Centre for the Study of Existential Risk Research Associate Shahar Avin.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
