Collective Action on Artificial Intelligence: A Primer and Review

by Robert de Neufville and Seth D. Baum | 15 July 2021

Download Preprint PDF

The development of safe and socially beneficial artificial intelligence (AI) will require collective action: outcomes will depend on the actions that many different people take. In recent years, a sizable but disparate literature has looked at the challenges posed by collective action on AI, but this literature is generally not well grounded in the broader social science literature on collective action. This paper advances the study of collective action on AI by providing a primer on the topic and a review of existing literature. It is intended to get an interdisciplinary readership up to speed on the topic, including social scientists, computer scientists, policy analysts, government officials, and other interested people.

The primer describes the theory of collective action and relates it to different types of AI collective action situations. A primary distinction is between situations in which individual and collective interests diverge, as in the prisoner’s dilemma or adversarial AI competition, and situations in which they converge, as in coordination problems such as establishing common platforms for AI. In general, collective action is easier to achieve when interests converge; when interests diverge, individual actors’ pursuit of their own self-interest can lead to outcomes that are worse for the group as a whole. The primer also explains how AI collective action situations depend both on whether the goods involved are excludable or rivalrous and on whether outcomes hinge on the action of a single actor or on some combination of actors.
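To make the diverging/converging distinction concrete, here is a minimal illustrative sketch, not from the paper itself: the payoff numbers are hypothetical, chosen only to exhibit the two structures. It computes the pure-strategy Nash equilibria of a prisoner’s-dilemma-style race and of a platform coordination game.

```python
# Illustrative sketch (hypothetical payoffs, not from the paper):
# contrasting a prisoner's dilemma with a coordination game to show
# why collective action is harder when interests diverge.
from itertools import product

def pure_nash_equilibria(payoffs):
    """Return pure-strategy Nash equilibria of a 2-player game.

    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff).
    """
    rows = sorted({a for a, _ in payoffs})
    cols = sorted({b for _, b in payoffs})
    equilibria = []
    for r, c in product(rows, cols):
        # (r, c) is an equilibrium if neither player gains by deviating alone.
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Diverging interests: each developer prefers to "race" no matter what
# the other does, even though both are better off if both "cooperate".
dilemma = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "race"):      (0, 4),
    ("race", "cooperate"):      (4, 0),
    ("race", "race"):           (1, 1),
}

# Converging interests: both actors want to adopt the same common platform,
# so self-interest supports the collective outcome.
coordination = {
    ("platform_a", "platform_a"): (2, 2),
    ("platform_a", "platform_b"): (0, 0),
    ("platform_b", "platform_a"): (0, 0),
    ("platform_b", "platform_b"): (2, 2),
}

print(pure_nash_equilibria(dilemma))       # [('race', 'race')]
print(pure_nash_equilibria(coordination))  # both matching outcomes
```

In the dilemma, the only equilibrium is mutual racing, which both actors rank below mutual cooperation; in the coordination game, the equilibria are exactly the collectively preferred matching outcomes.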

One major focus of the AI collective action literature identified in this paper is potentially dangerous AI race scenarios. AI races are not necessarily dangerous, and might even hasten the arrival of socially beneficial forms of AI, but they could be dangerous if individual actors’ interest in developing AI quickly diverges from the collective interest in ensuring that AI is safe and socially beneficial. The paper looks at both near-term and long-term AI races. The literature identified in this paper focuses in particular on near-term races to develop military applications and on long-term races to develop advanced forms of AI such as artificial general intelligence and superintelligence. The two types of races are potentially related, since near-term races could affect the long-term development of AI.

Finally, the paper evaluates different types of potential solutions to collective action problems. The collective action literature identifies three major types of solution: government regulation, private markets, and community self-organizing. All three types of solution can advance AI collective action, but no single type is likely to address the entire range of AI collective action problems. Instead of looking for narrow, silver-bullet solutions, it may be better to pursue a mix of solutions that address AI collective action issues in different ways and at different scales. Governance regimes should also account for other factors that could affect collective action, such as the extent to which AI developers are transparent about their technology.

AI collective action issues are increasingly pressing. Collective action will be necessary to ensure that AI serves the public interest rather than just the narrow private interests of those who develop it. Collective action will also be necessary to ensure that AI is developed with adequate safety measures and risk management protocols. Further work could provide more detailed analysis and support practical progress on AI collective action issues.

This paper has also been summarized in the AI Ethics Brief #71 of the Montreal AI Ethics Institute.

This paper extends GCRI’s interdisciplinary research on AI. It builds on GCRI’s prior work on the governance of AI, particularly the papers On the promotion of safe and socially beneficial artificial intelligence and Lessons for artificial intelligence from other global risks.

Academic citation:
Robert de Neufville and Seth D. Baum, 2021. Collective action on artificial intelligence: A primer and review. Technology in Society, vol. 66 (August), article 101649, DOI 10.1016/j.techsoc.2021.101649.

Download Preprint PDF
View in Technology in Society

Image credit: Volodymyr Goinyk

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
