August Newsletter: Collective Action on AI

by | 3 August 2021

Dear friends,

This month GCRI announces a new research paper, Collective action on artificial intelligence: A primer and review. Led by GCRI Director of Communications Robert de Neufville, the paper shows how groups of people can work together to bring about AI outcomes that no individual could achieve alone. It provides a primer on basic collective action concepts, drawn mainly from the political science literature, and reviews the existing literature on AI collective action, serving to get people from diverse interdisciplinary backgrounds up to speed on the topic. The paper places particular emphasis on dangerous AI race scenarios, a major focus of the AI collective action literature, and surveys the three major types of solutions to collective action problems: government regulation, private markets, and community self-organizing. AI governance will depend in part on AI collective action, and this paper provides important guidance on how to succeed at it.

Sincerely,

Seth Baum, Executive Director

Future AI Governance

GCRI Executive Director Seth Baum gave a virtual talk titled “Setting the stage for future AI governance” to the Center for Human-Compatible Artificial Intelligence (CHAI) (on June 8) and to the Legal Priorities Project (on July 16).

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
