Artificial Intelligence Needs Environmental Ethics

by Seth D. Baum and Andrea Owe | 16 November 2021

Download Preprint PDF

Artificial intelligence is an interdisciplinary topic. As such, it benefits from contributions from a wide range of disciplines. This short paper calls for greater contributions from the discipline of environmental ethics and presents several types of contributions that environmental ethicists can make.

First, environmental ethicists can raise the profile of the environmental dimensions of AI. For example, discussions of the ethics of autonomous vehicles have thus far focused mainly on "trolley problem" scenarios, in which the vehicle must decide whom to harm in a hypothetical crash. Far less attention has been paid to the environmental impacts of autonomous vehicles, even though these impacts are arguably much more important. Environmental ethicists could make the moral case for attention to this and to other environmental issues involving AI.

Second, environmental ethicists can help analyze novel ethical situations involving AI. These situations specifically involve artificial versions of phenomena that have long been studied in environmental ethics research. For example, AI and related technology could result in the creation of artificial life and artificial ecosystems. Environmental ethicists have often argued that “natural” life and ecosystems have important moral value. Perhaps the same reasoning would apply to artificial life and ecosystems, or perhaps it would not due to their artificiality. Environmental ethicists can help evaluate these sorts of novel ethical issues. Such work is especially important because existing work on AI ethics has focused more narrowly on issues centered on humans; environmental ethicists can help make the case for a broader scope.

Third, environmental ethicists can provide valuable perspectives on the future orientation of certain AI issues. Within the communities of people working on AI, there is a divide between those focused on near-term AI issues and those focused on long-term AI issues. Global catastrophic risks associated with AI are often linked to the long-term issues. Similar debates exist on environmental issues, given the long-term nature of major environmental problems such as global warming, natural resource depletion, and biodiversity loss. Environmental ethicists have made considerable progress on the ethics of the future, and this progress can be applied to debates about AI.

The paper builds on prior GCRI research and the authors' experience as environmental ethicists working on AI. "Moral consideration of nonhumans in the ethics of artificial intelligence" documents the tendency for work on AI ethics to focus on humans and calls for more robust attention to nonhumans. "Reconciliation between factions focused on near-term and long-term artificial intelligence" describes the debate between those favoring attention to near-term AI issues and those favoring attention to long-term AI issues. "Artificial intelligence, systemic risks, and sustainability" analyzes risks from near-term applications of AI in sectors related to environmental sustainability, such as agriculture and forestry.

Academic citation:
Baum, Seth D. and Andrea Owe, 2023. Artificial intelligence needs environmental ethics. Ethics, Policy, & Environment, vol. 26, no. 1, pages 139-143, DOI 10.1080/21550085.2022.2076538.

View in Ethics, Policy, & Environment

Image credit: Buckyball Design

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
