October Newsletter: The Superintelligence Debate

by Seth Baum | 2 October 2018

Dear friends,

When I look at debates about risks from artificial intelligence, I see a lot of parallels with debates over global warming. Both involve global catastrophic risks that are, to a large extent, driven by highly profitable industries. Indeed, today most of the largest corporations in the world are in either the computer technology or fossil fuel industries.

One key difference is that whereas global warming debates have been studied in great detail by many talented researchers, AI debates have barely been studied at all. As someone with a background in both areas, I am helping transfer insights from the study of global warming debates to the study of AI debates. This month, we are announcing two new papers that do exactly this. Both focus on “superintelligence”, a potential form of future AI that could significantly outsmart humans, with major implications for global catastrophic risk.

The first paper, “Superintelligence skepticism as a political tool,” examines the possibility that doubt about the prospect of superintelligence could be sown intentionally to advance political aims such as avoiding government regulation of industry or protecting research funding. The paper draws on the history of politicized skepticism about risky but profitable technologies, notably tobacco-industry doubt about the harms of smoking and fossil-fuel-industry “climate skepticism” and “climate denialism.” It finds small hints of politicized superintelligence skepticism in current debates and potential for much more, especially if government regulation becomes a serious prospect.

The second paper, “Countering superintelligence misinformation,” studies how to keep debates about superintelligence from being dominated by bad information. Whereas the first paper lays out the problem, this paper focuses on solutions. Extensive psychology research finds that misinformation, once established in the human mind, is difficult to correct. Therefore, the paper emphasizes strategies for preventing superintelligence misinformation from spreading in the first place. It also surveys strategies for correcting misinformation after it has spread.

Both papers are published in the open-access journal Information. You can read more about them in the GCRI blog here and here, or read the papers directly in the journal here and here.

Sincerely,
Seth Baum, Executive Director

Artificial Intelligence

GCRI Executive Director Seth Baum has a pair of new superintelligence papers in the open-access journal Information. “Superintelligence skepticism as a political tool” examines the possibility that doubt about the prospect of superintelligence could be sown intentionally to advance political aims such as avoiding government regulation of industry or protecting research funding. “Countering superintelligence misinformation” studies how to prevent debates about superintelligence from being dominated by bad information.

GCRI Associate Roman Yampolskiy gave a talk on AI governance at the Joint Multi-Conference on Human-Level Artificial Intelligence, held August 22-25 in Prague. Yampolskiy also did an interview about AI safety on the Super Data Science podcast.

GCRI Associate Roman Yampolskiy’s edited volume on the challenges of constructing safe and secure advanced machine intelligence, Artificial Intelligence Safety and Security, came out in August.

Help us make the world a safer place! The Global Catastrophic Risk Institute depends on your support to reduce the risk of global catastrophe. You can donate online or contact us for further information.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. Until recent LLMs emerged, the concern had always been about hypothetical future AI systems. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
