From AI for People to AI for the World and the Universe

1 December 2021


Work on the ethics of artificial intelligence often focuses on the value of AI to human populations, as seen, for example, in initiatives on "AI for People". These initiatives identify some important AI ethics issues, but they fall short by neglecting the ethical importance of nonhumans. This short paper calls for AI ethics to better account for nonhumans, such as by giving initiatives names like "AI for the World" or "AI for the Universe". The paper is part of an "AI for People" collection published as a special issue of the journal AI & Society.

The paper grounds its arguments in fundamental concepts of moral philosophy. As the paper explains, humans are not the only entities with morally relevant attributes, such as the ability to experience pleasure and pain or the possibility of having a life worth living. The fact that nonhuman entities possess morally relevant attributes is a strong reason to value them morally.

For AI, the stakes can be quite high. Modern AI systems consume large amounts of energy and other resources, with significant environmental impacts. Additionally, AI technology can be used to address environmental issues, though if not used carefully, it can end up doing more harm than good. Finally, advanced future AI systems, such as runaway superintelligence, could lead to outcomes whose moral value depends heavily on how the AI systems account for the moral value of nonhumans.

The paper proposes a twofold effort. First, moral philosophy work on AI should recognize the moral importance of nonhumans and explore their implications for AI ethics. Switching to names like "AI for the World" or "AI for the Universe" is one way to start in this direction. Second, computer science work on AI ethics should develop techniques that enable AI systems to account for the moral value of nonhumans. This could include exploring proxy schemes as alternatives to existing observational approaches, which infer the values of moral subjects and align AI systems to them; a sketch of the proxy idea follows below.
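To make the proxy-scheme idea concrete, here is a minimal, hypothetical sketch in Python. It assumes a toy setup in which an AI system scores candidate actions by combining a value estimate derived from observed human preferences with explicitly specified proxy terms for moral subjects, such as ecosystems, that cannot report their own values. All names, weights, and functions are illustrative assumptions, not anything specified in the paper.

```python
# Hypothetical sketch: a "proxy scheme" objective that scores candidate
# actions by combining (a) a value estimate inferred from observed human
# preferences with (b) weighted proxy terms standing in for moral subjects
# that cannot express their own values. All names and numbers are
# illustrative assumptions.

from dataclasses import dataclass
from typing import Callable


@dataclass
class MoralSubjectProxy:
    """A proxy representing a moral subject that cannot report its values."""
    name: str
    weight: float  # moral weight assigned to this subject by the designers
    welfare_fn: Callable[[str], float]  # maps an action to a welfare score


def observed_human_value(action: str) -> float:
    """Stand-in for a value estimate learned by observing human preferences."""
    return {"expand_datacenter": 1.0, "retrofit_for_efficiency": 0.6}.get(action, 0.0)


def total_value(action: str, proxies: list[MoralSubjectProxy]) -> float:
    """Combine the human-derived value with the weighted proxy welfare terms."""
    value = observed_human_value(action)
    for proxy in proxies:
        value += proxy.weight * proxy.welfare_fn(action)
    return value


if __name__ == "__main__":
    # Illustrative proxy for a local ecosystem affected by each action.
    ecosystem = MoralSubjectProxy(
        name="local_ecosystem",
        weight=0.8,
        welfare_fn=lambda action: -1.0 if action == "expand_datacenter" else 0.4,
    )
    for action in ("expand_datacenter", "retrofit_for_efficiency"):
        print(f"{action}: total value = {total_value(action, [ecosystem]):.2f}")
```

Under this toy weighting, the action favored by observed human preferences alone scores lower than the alternative once the ecosystem proxy term is included, illustrating how a proxy scheme can change an AI system's ranking of actions relative to a purely observational approach.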

The paper contributes to a significant line of GCRI research on environmental ethics and AI. "Moral consideration of nonhumans in the ethics of artificial intelligence" and "The ethics of sustainability for artificial intelligence" document the tendency for work on AI ethics to focus on humans and call for more robust attention to nonhumans. "Artificial intelligence, systemic risks, and sustainability" analyzes risks associated with near-term applications of AI in sectors related to environmental sustainability, such as agriculture and forestry. "Social choice ethics in artificial intelligence" discusses how to handle nonhumans within common AI ethics paradigms. Finally, "Artificial intelligence needs environmental ethics" calls for environmental ethicists to contribute their perspectives to AI ethics.

Academic citation:

Baum, Seth D. and Andrea Owe, 2023. From AI for people to AI for the world and the universe. AI & Society, vol. 38, no. 2 (April), pages 679-680, DOI 10.1007/s00146-022-01402-5.




Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
