The Ethics of Sustainability for Artificial Intelligence

by Andrea Owe and Seth D. Baum | 17 November 2021

AI technology can have significant effects on domains associated with sustainability, such as certain aspects of human society and the natural environment. Sustainability itself is widely regarded as a good thing, including in recent initiatives on AI and sustainability. There is therefore a role for ethical analysis in clarifying what is meant by sustainability and the ways in which sustainability in the context of AI might or might not be good. This paper provides a foundational ethical analysis of sustainability for AI, describes the ethical basis of the existing body of work on AI and sustainability, and presents an argument for a specific ethical view on AI and sustainability. The paper was presented at the conference AI for People: Towards Sustainable AI (CAIP'21).

As the paper explains, sustainability is not an inherently ethical concept. “Sustainability” simply refers to the ability of something to continue over time; the thing to be sustained can be good, bad, or neutral. Common usage of the term “sustainability” assumes that the thing to be sustained is some combination of social and ecological systems. The term is sometimes also used in other ways, such as to refer to the sustainability of a business or organization, or the sustainability of an AI system. The paper argues that usage of the term “sustainability” should address three ethical questions. First, what should be able to be sustained, and why? Second, for how long should it be able to be sustained? Third, how much effort should be made toward sustainability?

The paper further distinguishes between sustainability and optimization. Making something sustainable means giving it the potential to continue existing in at least some minimal form. In contrast, optimizing something means putting it in the best form that it can have. Therefore, sustainability may be considered a basic minimum standard of conduct toward future time periods, whereas optimization may be considered a more substantial goal. In common usage, sustainability is treated as a good thing, but it may be better understood as a not-terrible thing. If human civilization has to focus on sustaining itself rather than on loftier goals like optimization, then it is in a very bad situation.

With this theoretical perspective in place, the paper surveys prior work on AI and sustainability, examining published sets of AI ethics principles and academic research on the topic. The paper finds that most work on AI and sustainability focuses on common conceptions of environmental sustainability, although some work has been done on the sustainability of AI systems and other things. Additionally, most work is ultimately oriented toward sustaining human populations, with AI and the environment having value insofar as they support human populations. Finally, most work lacks well-specified ethical foundations, offering no clear answers to the three questions listed above.

The paper then provides its own answers to the three questions. First, it argues for sustaining both humans and nonhumans. Second, it argues for sustainability over long time scales, including the astronomically distant future. Third, it argues for a large amount of effort toward sustainability. It additionally calls for emphasizing optimization over sustainability in cases where the two diverge.

Finally, the paper presents implications for AI. One is that AI should be used to improve long-term sustainability and optimization, such as by reducing global catastrophic risk. Another is that attention should be paid to long-term forms of AI, which could be particularly consequential for long-term sustainability and optimization. These AI topics only partially overlap with what is typically considered within the realm of AI and sustainability, but the paper argues that they are a more appropriate focus for work on AI and sustainability.

The paper extends GCRI’s research on AI ethics, especially the papers Moral consideration of nonhumans in the ethics of artificial intelligence and Reconciliation between factions focused on near-term and long-term artificial intelligence. It additionally builds on GCRI’s research on sustainability and environmental risks, especially Integrating the planetary boundaries and global catastrophic risk paradigms.

This paper has also been summarized in AI Ethics Brief #85 of the Montreal AI Ethics Institute and is included in the 2022 edition of The State of AI Ethics Report. The paper is also discussed in the Medium article “Is 2022 the Year that AI Ethics Takes Sustainability Seriously?”.

Academic citation:
Owe, Andrea and Seth D. Baum, 2021. The ethics of sustainability for artificial intelligence. In Philipp Wicke, Marta Ziosi, João Miguel Cunha, and Angelo Trotta (Editors), Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI (CAIP 2021), Bologna, pages 1-17, DOI 10.4108/eai.20-11-2021.2314105.

Download PDF

View in Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI (CAIP 2021)

Access the data used in the paper

Image credit: Max Pixel

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
