Moral Consideration of Nonhumans in the Ethics of Artificial Intelligence

by Andrea Owe and Seth D. Baum | 7 June 2021

Download Preprint PDF

In the ethics of artificial intelligence, a major theme is the challenge of aligning AI to human values. This raises the question of the role of nonhumans. Indeed, AI can profoundly affect the nonhuman world, including nonhuman animals, the natural environment, and the AI itself. Given that large parts of the nonhuman world are already under immense threat from human affairs, there is reason to fear potentially catastrophic consequences should AI R&D fail to account for nonhumans, for example in AI systems for industrial and commercial infrastructure or in future artificial general intelligence (AGI). This paper documents the state of attention to nonhumans within the field of AI ethics and presents an argument for giving nonhumans adequate attention.

The paper specifically examines the extent to which nonhumans are given moral consideration in AI ethics, and the extent to which they should be. Moral consideration of nonhumans means actively valuing nonhumans for their own sake—in philosophy terms, intrinsically valuing them. Unfortunately, the paper finds that most work in AI ethics ignores nonhumans or values nonhumans only for the effects they have on humans. This leaves ample opportunity for the development and use of AI that adversely impacts nonhumans.

The paper documents moral consideration of nonhumans in academic AI ethics research, statements of AI ethics principles, AGI R&D projects, and select initiatives to design, build, apply, and govern AI. Aside from a line of research on the moral status of AI, the field of AI ethics generally fails to give moral consideration to nonhumans: the paper finds no attention to nonhumans in 76 of 84 sets of AI ethics principles surveyed by Jobin et al., 40 of 45 AGI R&D projects surveyed by Baum, 38 of 44 chapters in the Oxford Handbook of Ethics of AI, and 13 of 17 chapters in the anthology Ethics of Artificial Intelligence. In the latter two volumes, the only dedicated attention to nonhumans concerns the moral status of AI; no other types of nonhumans receive dedicated attention.

More could be done. The Microsoft AI for Earth program is a good example of AI used in ways that benefit nonhumans. It supports several programs for environmental protection and biodiversity conservation that give explicit moral consideration to nonhumans, including Wild Me, eMammal, NatureServe, and Zamba Cloud. Other AI groups could run similar programs. Within AI ethics research, the paper outlines ideas for studying nonhuman algorithmic bias, such as by applying ecolinguistics to biases in natural language processing. Algorithmic bias is a major topic in AI ethics, but the existing literature has focused on social biases among humans; research in ecolinguistics shows that English, the primary language for AI system design, contains biases in favor of humans over nonhumans (see the sketch below).
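For readers interested in what such research could look like in practice, below is a minimal sketch of a WEAT-style word-embedding association test (after Caliskan et al., 2017), adapted to human versus nonhuman terms. The sketch is illustrative only and is not from the paper: the word lists and toy vectors are hypothetical placeholders, and a real analysis would substitute pretrained embeddings (e.g., GloVe or word2vec) and term lists grounded in ecolinguistics.

```python
# Minimal sketch of a WEAT-style association test for human/nonhuman bias
# in word embeddings. All word lists and the toy vectors below are
# hypothetical placeholders; a real study would load pretrained vectors.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word, pos_attrs, neg_attrs, emb):
    """Mean similarity of a word to positive minus negative attribute words."""
    pos = np.mean([cosine(emb[word], emb[a]) for a in pos_attrs])
    neg = np.mean([cosine(emb[word], emb[a]) for a in neg_attrs])
    return pos - neg

def weat_effect_size(targets_x, targets_y, pos_attrs, neg_attrs, emb):
    """Standardized difference in association between two target word sets."""
    assoc_x = [association(w, pos_attrs, neg_attrs, emb) for w in targets_x]
    assoc_y = [association(w, pos_attrs, neg_attrs, emb) for w in targets_y]
    pooled_std = np.std(assoc_x + assoc_y, ddof=1)
    return (np.mean(assoc_x) - np.mean(assoc_y)) / pooled_std

# Hypothetical usage with random placeholder vectors, so the result here is
# noise; with real embeddings, a large positive effect size would indicate
# that human terms associate more strongly with positive attributes.
rng = np.random.default_rng(0)
vocab = ["person", "citizen", "animal", "river", "good", "worthy", "mere", "brute"]
emb = {w: rng.normal(size=50) for w in vocab}

human_terms = ["person", "citizen"]      # hypothetical target set
nonhuman_terms = ["animal", "river"]     # hypothetical target set
positive_attrs = ["good", "worthy"]      # hypothetical attribute set
negative_attrs = ["mere", "brute"]       # hypothetical attribute set

print(weat_effect_size(human_terms, nonhuman_terms,
                       positive_attrs, negative_attrs, emb))
```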

Given the limited moral consideration of nonhumans in the current field of AI ethics, the paper argues for more consistent and extensive moral consideration of nonhumans. The argument draws on concepts of ontological and ethical anthropocentrism as developed in environmental ethics. Humans are members of the animal kingdom and part of nature, and there is no sound basis for restricting moral consideration exclusively to humans. There are important questions of how much moral consideration to give to nonhumans relative to humans. The paper sets these questions aside to call for a more basic improvement in the moral consideration of nonhumans across AI ethics.

This paper extends GCRI’s interdisciplinary research on AI. It builds on prior GCRI research on AI ethics, especially the paper Social Choice Ethics in Artificial Intelligence. It uses ethics data compiled in our 2017 and 2020 surveys of AGI R&D projects, especially data on project goals. It also continues our tradition of applying the rich body of environmental research to new AI issues, as previously done in our papers On the Promotion of Safe and Socially Beneficial Artificial Intelligence and Lessons for artificial intelligence from other global risks.

This paper has also been summarized in AI Ethics Brief #73 of the Montreal AI Ethics Institute and in the blog of the Stanford MAHB. It is also included in the 2022 AI Ethics Report and is discussed in the Medium article “Is 2022 the Year that AI Ethics Takes Sustainability Seriously?”.

Academic citation:
Owe, Andrea and Seth D. Baum, 2021. Moral consideration of nonhumans in the ethics of artificial intelligence. AI & Ethics, vol. 1, no. 4 (November), pages 517-528, DOI 10.1007/s43681-021-00065-0.

Download Preprint PDF | View in AI & Ethics | View in ReadCube

Image credit: Buckyball Design

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
