Medium-Term Artificial Intelligence and Society

by Seth Baum | 3 June 2020

View in Information

Discussion of artificial intelligence tends to focus on either near-term or long-term AI. That includes some contentious debate between “presentists” who favor attention to the near-term and “futurists” who favor attention to the long-term. Largely absent from the conversation is any attention to the medium-term. This paper provides dedicated discussion of medium-term AI and its accompanying societal issues. It focuses on how medium-term AI can be defined and how it relates to the presentist-futurist debate. It builds on GCRI’s prior work on the debate, as well as similar work by other researchers [1].

Roughly speaking, near-term AI is AI that already exists or could readily be developed with existing techniques, such as deep learning, while long-term AI is AI that could radically transform the world, such as human-level artificial general intelligence. The paper addresses exactly how to define the near-, medium-, and long-term for AI. It considers six dimensions along which the time periods can be defined: calendar years; the feasibility or ambitiousness of the AI; the degree of certainty about which AI can or will be built; the degree of sophistication or capability of the AI; the size of the AI’s impacts on human society and the world at large; and the urgency of the challenges posed by the AI. The paper finds that discussions of near-term AI tend to emphasize feasibility and certainty, whereas discussions of long-term AI tend to emphasize capability and impacts.
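To make the six-dimension framework concrete, here is a minimal illustrative sketch in Python. It is not from the paper; all names, fields, and thresholds are invented for illustration. The toy classifier merely echoes the paper’s finding that near-term discussions emphasize feasibility and certainty while long-term discussions emphasize capability and impacts:

```python
from dataclasses import dataclass

@dataclass
class AIScenario:
    """Illustrative record of the paper's six dimensions (names invented here)."""
    calendar_year: int   # when the AI might exist
    feasibility: float   # how readily it could be built with existing techniques (0-1)
    certainty: float     # confidence about whether it can or will be built (0-1)
    capability: float    # degree of sophistication of the AI (0-1)
    impact: float        # size of its effects on society and the world (0-1)
    urgency: float       # how pressing its accompanying challenges are (0-1)

def rough_term(s: AIScenario) -> str:
    """Toy heuristic (thresholds are arbitrary, for illustration only)."""
    if s.feasibility > 0.8 and s.certainty > 0.8:
        return "near-term"    # emphasis of presentist discussions
    if s.capability > 0.8 and s.impact > 0.8:
        return "long-term"    # emphasis of futurist discussions
    return "medium-term"

print(rough_term(AIScenario(2030, 0.6, 0.6, 0.7, 0.7, 0.6)))  # -> "medium-term"
```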

The paper proposes the medium-term AI hypothesis: “There is an intermediate time period in which AI technology and accompanying societal issues are important from both presentist and futurist perspectives”. If the medium-term AI hypothesis is true, then medium-term AI offers a point of common ground between presentists and futurists. Presentists may care about medium-term AI if it is sufficiently feasible and certain, or if it raises societal issues that are similar to issues raised by near-term AI. Futurists may care about medium-term AI because of its influence on long-term AI, including both the technology itself and the surrounding societal context.

The paper examines the medium-term AI hypothesis via analysis of four domains. Governance institutions can be quite durable; near-term AI governance institutions may persist into the medium-term, with implications for long-term AI. Efforts to promote collective action on AI could play out across the near- and medium-terms, with implications for long-term AI, though it is unclear whether they will. Corporate AI development over the near- and medium-terms could affect long-term AI, especially if companies have financial incentives to develop precursor technologies to long-term AI, though it is unclear whether they do [2]. Finally, some military and national security communities are already paying attention to long-term AI and are likely to remain active in AI throughout the medium-term, raising issues similar to those already posed by near-term military AI.

Two medium-term AI topics are not addressed in detail by the paper. First, medium-term AI can be important for its own sake, separate from its importance to presentists and futurists. Second, the computer science techniques of medium-term AI are important to consider both for their own sake and for their implications for societal issues. These two topics would be worthy subjects of future research.

Academic citation:
Baum, Seth D., 2020. Medium-term artificial intelligence and society. Information, vol. 11, no. 6, article 290, DOI 10.3390/info11060290.

[1] See Baum 2018, Cave and Ó hÉigeartaigh 2019, and Prunkl and Whittlestone 2020 on the near-term vs. long-term AI divide, and Parson et al. 2019a, 2019b on medium-term AI.

[2] The profitability of technologies that are precursors to long-term AI has been termed AGI profit-R&D synergy; see Baum 2017.

