July Newsletter: Artificial Interdisciplinarity

by Seth Baum | 31 July 2020

Dear friends, 

A major impediment to addressing global catastrophic risk is the cognitive challenge posed by the complex, interdisciplinary nature of the risks. Identifying practical, effective solutions for reducing the risk requires a command of a wide range of subjects. That is not easy for anyone, including those of us who work on it full time. 

This month, we announce a new paper on the use of artificial intelligence to ease the cognitive burdens of interdisciplinary research and better address complex societal problems like global catastrophic risk. The paper covers AI systems already in use, such as the TERRA project led by our colleagues at CSER, as well as potential future systems that could be developed over the medium and long term. (See also our recent paper Medium-term artificial intelligence and society.) Creating effective AI systems for supporting interdisciplinary research is itself no easy task, but it is a worthy one for those seeking to develop AI for the social good. 

The paper is Artificial interdisciplinarity: Artificial intelligence for research on complex societal problems, published in Philosophy & Technology.

Sincerely,

Seth Baum, Executive Director

Outreach

GCRI recently completed the first round of its 2020 advising and collaboration program for select AI projects, which was made possible through the generous support of Gordon Irlam. A total of 60 people responded to our initial call for advisees and collaborators, and we held 41 one-on-one calls with people interested in working in the field of global catastrophic risk or in collaborating on our AI projects. Program participants came from at least 17 different countries and had a wide range of interests and professional backgrounds. The program allowed us to connect with many talented people, some of whom made substantial contributions to our ongoing AI projects. We anticipate conducting a second round of the program later in 2020.

Fundraising

GCRI is pleased to announce that we recently received $140,000 in new grants through the Survival and Flourishing Fund. $90,000 is from Jaan Tallinn and $50,000 is from Jed McCaleb. Both grants are for general support of GCRI. We are grateful for these donations and look forward to using them to advance our mission of developing the best ways to confront humanity’s gravest threats.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
