View in the Bulletin of the Atomic Scientists

This article describes the disagreement among AI experts over whether near-term or long-term AI issues are more important. It argues that the disagreement can be resolved by focusing on opportunities to address both sets of issues at the same time, looking specifically at three such opportunities: social norms, technical research, and public policy.

The article begins as follows:

Artificial intelligence experts are divided over the threat of superintelligent computers. One group argues that even though these machines may be decades or centuries away, the scale of the catastrophe they could cause is so great that we should take action now to prevent them from taking over the world and killing everyone. Another group dismisses the fear of superintelligent computers as speculative and premature, preferring to focus on existing and near-future AI. In fact, though, these two sides may have more common ground than they think. It’s not necessary to choose between focusing on long-term and short-term AI threats when there are actions we can take now to simultaneously address both.

A superintelligent computer would be one that was smarter than humans across a wide range of cognitive domains. Concern about superintelligence dates to Irving Good’s 1966 paper “Speculations Concerning the First Ultraintelligent Machine,” which posited that such AI would be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” More recently, philosopher Nick Bostrom (in his book Superintelligence) and tech celebrities like Bill Gates and Elon Musk have thrust the issue into the spotlight, arguing that superintelligence poses a grave danger we should worry about right now.

All this focus on superintelligence, though, has been poorly received by many people in the AI community. Existing AI technology is nowhere close to being superintelligent, but it does pose current and near-term challenges. That makes superintelligence at best irrelevant and at worst “a distraction from the very real problems with artificial intelligence today,” as Microsoft principal researcher Kate Crawford put it in a recent op-ed in The New York Times. Near-term AI problems include the automation of race and gender bias, excessive violence from military robots, and injuries from robots used in medicine, manufacturing, and transportation, particularly self-driving cars.

The remainder of the article is available in the Bulletin of the Atomic Scientists.

Image credit: Buckyball Design


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the time of the publication of the work referenced here.