Superintelligence Skepticism as a Political Tool

by Seth D. Baum | 24 August 2018

View in Information

For decades, there have been efforts to exploit uncertainty about science and technology for political purposes. This practice traces to the tobacco industry’s effort to sow doubt about the link between tobacco and cancer, and it can be seen today in skepticism about climate change and other major risks. This paper analyzes the possibility that the same could happen for the potential future artificial intelligence technology known as superintelligence.

Artificial superintelligence is AI that is much smarter than humans. Current AI is not superintelligent. Some people believe that superintelligence can be built, and that if built, it would have extreme consequences, which could be either good or bad depending on its design. However, other people are skeptical of these claims, and of the claim that this issue is important enough to merit attention today. This skepticism could become the basis for the kind of politicized skepticism seen on other issues.

The paper examines current superintelligence skepticism and finds that it is sometimes used politically, though not to nearly the extent found for issues like climate change. Some AI researchers appear to profess superintelligence skepticism in order to protect the reputation and funding of their field. Some AI technology corporations show hints of politicized skepticism, but not to any significant extent. However, if superintelligence skepticism were politicized, it could be very successful, in part because of the difficulty of resolving uncertainty about this possible future technology.

The paper is part of an ongoing effort by the Global Catastrophic Risk Institute to accelerate the study of the social and policy dimensions of AI by leveraging insights from other fields. Other examples include the paper On the promotion of safe and socially beneficial artificial intelligence, which draws on insights from environmental psychology to study how to motivate AI researchers to pursue socially beneficial AI designs, and ongoing research modeling the risk of artificial superintelligence (see this, this, and this), which applies risk analysis techniques that GCRI previously used for the risk of nuclear war. This capacity to draw on insights from other fields speaks to the value of GCRI's cross-risk approach to the study of global catastrophic risk.

Academic citation:
Seth D. Baum, 2018. Superintelligence skepticism as a political tool. Information, vol. 9, no. 9, article 209, DOI 10.3390/info9090209.

Image credit: Melissa Thomas Baum

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. Until recent LLMs emerged, the concern had always been about hypothetical future AI systems. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
