Tackling Near and Far AI Threats at Once

by | 6 October 2016

View in the Bulletin of the Atomic Scientists

This article describes the disagreement among AI experts over whether near-term or long-term AI issues are more important. It argues that the disagreement can be resolved by focusing on opportunities to address both sets of issues at the same time. The article looks specifically at ways we can address both sets of issues through social norms, technical research, and public policy.

The article begins as follows:

Artificial intelligence experts are divided over the threat of superintelligent computers. One group argues that even though these machines may be decades or centuries away, the scale of the catastrophe they could cause is so great that we should take action now to prevent them from taking over the world and killing everyone. Another group dismisses the fear of superintelligent computers as speculative and premature, and prefers to focus on existing and near-future AI. In fact, though, these two sides may have more common ground than they think. It is not necessary to choose between focusing on long-term and short-term AI threats when there are actions we can take now to address both at once.

A superintelligent computer would be one that was smarter than humans across a wide range of cognitive domains. Concern about superintelligence dates to Irving Good’s 1966 paper “Speculations concerning the first ultraintelligent machine,” which posited that such AI would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” More recently, philosopher Nick Bostrom (in his book Superintelligence) and tech celebrities like Bill Gates and Elon Musk have thrust the issue into the spotlight, arguing that superintelligence poses a grave danger we should worry about right now.

All this focus on superintelligence, though, has been poorly received by many people in the AI community. Existing AI technology is nowhere close to being superintelligent, but does pose current and near-term challenges. That makes superintelligence at best irrelevant and at worst “a distraction from the very real problems with artificial intelligence today,” as Microsoft principal researcher Kate Crawford put it in a recent op-ed in The New York Times. Near-term AI problems include the automation of race and gender bias, excessive violence from military robots, and injuries from robotics used in medicine, manufacturing, and transportation, in particular self-driving cars.

The remainder of the article is available in the Bulletin of the Atomic Scientists.

Image credit: Buckyball Design


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to the publication date of the work referenced here.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
