Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems

by Seth Baum | 28 July 2020


One major challenge in making progress on global catastrophic risk is its interdisciplinarity. Understanding how best to address the risk requires input from risk analysis, public policy, social science, ethics, and a variety of other fields pertaining to specific risks, such as astronomy for asteroid risk and computer science for artificial intelligence (AI) risk. Working across all these disparate fields is a very difficult challenge for human minds. This paper explores the use of AI to help with the cognitive challenge of interdisciplinary research so as to advance progress on global catastrophic risk and other complex societal problems. It coins the term “artificial interdisciplinarity” to refer to AI systems that help with interdisciplinary research.

While all areas of research can be cognitively difficult, interdisciplinary research poses several distinct challenges. First, it is often difficult to bridge divides between academic disciplines because they differ in terminology, paradigms or ways of thinking, and views on what makes for good research. Second, the quantity of literature relevant to complex interdisciplinary topics can be overwhelmingly large, far more than any one researcher can master. Third, it is difficult to conduct peer review of interdisciplinary research manuscripts and funding proposals because reviewers often lack expertise across all the disparate disciplines included in the research. Finally, insights from the study of one interdisciplinary topic are not readily transferred to the study of other, similar interdisciplinary topics because of the psychological “distance” between the topics.

Current AI systems already help with some of these challenges. Search engines such as Google Scholar and Semantic Scholar help identify relevant literature and expert reviewers across disciplines. Recommendation engines help as well; for example, the project http://x-risk.net uses a custom artificial neural network to recommend literature on catastrophic risk. Machine learning tools are also being used for “automated content analysis” to map the literature on specific topics. All of these tools facilitate interdisciplinary research, but they are limited by the fundamental limitations of current AI techniques, in particular the inability of machine learning to handle causal relationships, hierarchies, and open-ended environments.
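Literature recommenders of this kind are often built on document-similarity techniques. As a rough illustration only (not the actual x-risk.net system, which uses a custom neural network), a minimal TF-IDF similarity recommender over paper abstracts might look like this:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF weight vectors (dicts) for a list of text documents."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # Document frequency: number of documents containing each term.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({t: (count / len(tokens)) * math.log(n / df[t])
                        for t, count in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def recommend(query, corpus, top_k=2):
    """Return indices of the corpus documents most similar to the query text."""
    vectors = tfidf_vectors([query] + corpus)
    q, rest = vectors[0], vectors[1:]
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(q, rest[i]), reverse=True)
    return ranked[:top_k]
```

For instance, a query abstract about "artificial intelligence and machine learning risk" would rank a paper on machine learning methods above one on asteroid defense, which in turn would rank above one on nuclear winter. Neural recommenders replace the TF-IDF vectors with learned embeddings, but the ranking-by-similarity structure is the same.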

Future “artificial interdisciplinarity” systems could add more value if they can improve at certain key tasks, including the interpretation of texts, the translation of language and ideas from one discipline to another, and the transfer of insight from one topic to another. Each of these is an active area of AI research. For example, the publisher Elsevier has sponsored the project ScienceIE to work on interpretation. The field of AI has major lines of work dedicated to translation across human languages and to transfer learning. Progress on these fronts may require breakthroughs beyond current AI paradigms, but it would be of high value to understanding and addressing global catastrophic risk and other interdisciplinary societal problems.

Over the long term, it is not hard to imagine some future AI that can accomplish all the cognitive tasks of interdisciplinary research. An advanced artificial general intelligence (AGI) may be able to think at least as well as humans across the full range of cognitive tasks. Such an AI would presumably also be very capable of doing interdisciplinary research. Indeed, some current projects seeking to build AGI are motivated by the cognitive difficulty of interdisciplinary research for human minds. On the other hand, advanced AGI may not be available any time soon and may itself pose major risks, even if it is designed as an “oracle” that can only answer questions that humans pose to it.

This paper builds on several prior lines of GCRI research. All of our research is interdisciplinary, providing us with experience in the cognitive challenges addressed in the paper. We specialize in the transfer of insights across issues, such as in our papers Lessons for artificial intelligence from other global risks and On the promotion of safe and socially beneficial artificial intelligence. Our prior work also cuts across near-term, medium-term, and long-term AI, such as our papers Medium-term artificial intelligence and society and Reconciliation between factions focused on near-term and long-term artificial intelligence. Finally, our knowledge of the motivations of current AGI projects derives from our paper A survey of artificial general intelligence projects for ethics, risk, and policy.

Academic citation:
Baum, Seth D., 2021. Artificial interdisciplinarity: Artificial intelligence for research on complex societal problems. Philosophy & Technology, vol. 34, no. S1 (November), pages 45-63, DOI 10.1007/s13347-020-00416-5.

Download Preprint PDF
View in Philosophy & Technology
View in ReadCube

Stockholm Stadsbiblioteket photo credit: Gunnar Ridderström


Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
