On the Intrinsic Value of Diversity

by Seth Baum | 17 June 2024

Download PDF Preprint

Diversity is an important ethical concept. It’s also relevant to global catastrophic risk in at least two ways: the risk of catastrophic biodiversity loss and the need for diversity among people working on global catastrophic risk. It’s additionally relevant to scenarios involving extreme good, such as in well-designed advanced AI. However, the ethics of diversity has been remarkably understudied. To help address the full range of issues involving diversity, this paper presents a foundational study of the ethics of diversity. It is published in the philosophy journal Inquiry.

The idea for this paper began during work on our previous paper Nonhuman value: A survey of the intrinsic valuation of natural and artificial nonhuman entities. While researching nonhuman value, Andrea Owe and I were struck by how little research we were finding on diversity. Such a prominent and important concept should be the subject of extensive research, but apparently it wasn't. So, we decided to pursue it ourselves. It is unusual for us, two interdisciplinary researchers at a catastrophic risk think tank, to conduct foundational philosophical research on a major ethics concept. However, we did have relevant backgrounds in moral philosophy and biodiversity, and we felt that the topic was too important not to engage with. We are glad that we did.

The paper is a study of the entire category of diversity. Most discussions of diversity pertain to two specific types: biodiversity and the diversity of human beings (race, gender, religion, nationality, etc., which the paper refers to as sociodiversity). However, the concept of diversity is more general: there can be diversity of products in a store, of planets in a solar system, and so on. The paper studies whether diversity itself is valuable, not just specific types of diversity.

The paper specifically studies whether diversity is intrinsically valuable, meaning that it has value in itself. Diversity can often be valuable for other reasons, such as a diverse set of tools being valuable for accomplishing a task. However, is a diverse set of objects valuable on its own, regardless of its relation to anything else?

To address this question, the paper presents two sets of work. First, the paper surveys the small prior literature on the intrinsic value of diversity, consisting mainly of discussions of biodiversity and sociodiversity, and attempts to draw insights from it. Second, the paper presents three original thought experiments designed to clarify moral intuitions about diversity:

1) The space capsule isolation test. This adapts the isolation test in G.E. Moore's 1903 book Principia Ethica. In our version, suppose that the universe is about to be destroyed. Before it is destroyed, humanity can take one last action: sending a capsule into outer space. The capsule and its contents will then be all that is left in the universe. Suppose we can put three objects into the capsule. Should we choose a diverse set of objects? The paper considers a cup, a ball, and a shoe, as well as three unknown objects called a blargh, a criftula, and a dombit. We find that we would favor putting a diversity of objects into the capsule, implying that diversity has at least some intrinsic value.

2) The maximization box. Imagine a box that maximizes the diversity of its contents and does not change its contents in any other way. Suppose one can put anything inside the box: a few grains of sand, the entire global human population, the Milky Way galaxy, etc. Should one put things into the maximization box? This addresses the more general question: all else equal, is it good to maximize diversity? We can imagine a case against maximizing diversity based on the idea of a happy medium: neither too much nor too little diversity. However, we find ourselves in favor of maximizing diversity, based on the idea that more of an intrinsically good thing is generally better.

The maximization box thought experiment raises the question of how diversity should be defined for purposes of moral evaluation. For example, imagine that everyone speaks a common first language and can have any second language. Each person could have a different second language, as long as new languages can be created for this purpose. That would create the largest number of languages, but there would be no diversity in the number of speakers per language: each language would have exactly one speaker. More generally, there can be a tension between the diversity of individual elements within a system (e.g., individual languages) and the diversity of the overall system pattern or structure (e.g., the numbers of people speaking each language). We find ourselves tentatively favoring some reconciliation between the diversity of individual system elements and of overall system patterns. A simple numerical sketch of this tension appears below.
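To make the tension concrete, here is a minimal numerical sketch in Python. It is not from the paper; the diversity measure (a simple count of distinct types, sometimes called richness) and the population numbers are our own illustrative assumptions.

from collections import Counter

def richness(items):
    # Count the number of distinct types in a collection.
    return len(set(items))

# Hypothetical population: everyone shares a common first language and
# each person adopts a unique second language.
population_size = 1000
second_languages = ["language_%d" % i for i in range(population_size)]

# Diversity of the individual elements: how many distinct languages exist?
print(richness(second_languages))  # 1000, i.e., maximal

# Diversity of the system pattern: how varied are the speaker counts per language?
speakers_per_language = Counter(second_languages).values()  # every value is 1
print(richness(speakers_per_language))  # 1, i.e., no variety at all

Under this toy measure, maximizing the diversity of the elements (the languages) leaves the overall pattern (speakers per language) with no diversity whatsoever, which is the tension described above.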

3) The cosmic genie. Imagine a genie that will grant a single wish: it will convert the entire cosmos into some configuration that optimizes for moral value. Should one wish for a diverse configuration? If so, how diverse? This addresses the question of how important an intrinsic value diversity is compared to other intrinsic values. It may be that optimizing for other intrinsic values would involve a certain configuration repeated over and over across the cosmos in a very non-diverse tiling pattern. Increasing the diversity could then mean decreasing other intrinsic values. We find ourselves tentatively divided on this. One of us would favor other intrinsic values, using diversity only as a tiebreaker, whereas the other would accept some decline in other intrinsic values in order to have some diversity, though with diversity only being a small factor. The toy comparison below illustrates the difference between these two positions.
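Here is a toy comparison of the two positions, again not from the paper: the configurations, the numbers, and the 0.05 weight are made-up assumptions used purely for illustration.

def lexicographic_rank(config):
    # Other intrinsic values come first; diversity only breaks ties.
    return (config["other_value"], config["diversity"])

def weighted_rank(config, diversity_weight=0.05):
    # Diversity counts, but only as a small factor.
    return config["other_value"] + diversity_weight * config["diversity"]

uniform_tiling = {"other_value": 100.0, "diversity": 0.0}
diverse_mix = {"other_value": 99.0, "diversity": 30.0}

# Tiebreaker view: the uniform tiling wins, since 100 > 99.
print(max([uniform_tiling, diverse_mix], key=lexicographic_rank))

# Small-factor view: the diverse mix wins, since 99 + 0.05 * 30 = 100.5 > 100.
print(max([uniform_tiling, diverse_mix], key=weighted_rank))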

The cosmic genie relates to some discussions of advanced future technology, especially advanced AI. The idea of tiling the universe with value can be found in early discussions of advanced AI, though issues of diversity were not a particular focus. Given ongoing developments in AI, these seemingly exotic questions may take on a practical character, which is all the more reason to give them careful study. See also GCRI's work on AI.

GCRI is active in advancing the demographic and intellectual diversity of the field of global catastrophic risk. For further information, please see the GCRI Statement on Race and Intelligence, the GCRI Statement on the Demographic Diversity of the GCRI Team, January 2023, the GCRI Statement on Pluralism in the Field of Global Catastrophic Risk, and the GCRI Statement on Racism.

Academic citation:
Seth D. Baum and Andrea Owe. On the intrinsic value of diversity. Inquiry, forthcoming. DOI: 10.1080/0020174X.2024.2367247.

Download PDF Preprint | View in Inquiry

Image credit: Seth Baum

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

Manipulating Aggregate Societal Values to Bias AI Social Choice Ethics

AI ethics concepts like value alignment propose something similar to democracy, aggregating individual values into a social choice. This paper, published in the journal AI and Ethics, explores the potential for AI systems to be manipulated in ways analogous to sham elections in authoritarian regimes.
