GCRI Receives $250,000 Donation for AI Research and Outreach

15 December 2018

I am delighted to announce that GCRI has received a $250,000 donation from Gordon Irlam, to be used for GCRI’s research and outreach on artificial intelligence. The donation will mainly fund Robert de Neufville and me during 2019.

The donation is a major first step toward GCRI’s goal of raising $1.5 million to enable the organization to start scaling up. Our next fundraising priority is to bring GCRI Director of Research Tony Barrett on full-time, and possibly also to make one other senior hire, whom I can only discuss privately.

Regarding his donation, Irlam states:

“GCRI does solid and important work on vitally important topics and is one of the only US organizations working on these issues. They have done this work in the past on a very small budget. Advanced AI will have a profound effect on society. It is important that this effect be beneficial. My giving to GCRI is in the hope that they can scale up their research, and scale up their research outreach, so that societal and corporate policies and responses to artificial general intelligence are shaped appropriately.”

All of us at GCRI are grateful for this donation and excited for the work it will enable us to do.

Here is a summary of the specific research and outreach projects funded by this donation:

Corporate governance of AI: Following GCRI’s recent publications on AI skepticism and misinformation, this project seeks to improve how the for-profit sector handles AI risks. It will begin with outreach to people at AI companies and may include further research on strategies for improving corporate governance of AI.

National security dimensions of AI: This project conducts research and outreach on the risks associated with national security and military involvement in AI. The project builds on GCRI’s recent success in outreach to the US national security community on AI, as well as our backgrounds in AI and national security.

Anthropocentrism in AI ethics: This project evaluates the extent to which AI ethics favors humans, develops proposals for how AI ethics should handle questions of human favoritism, and conducts outreach to improve the state of AI ethics conversations. The project extends recent GCRI research on social choice ethics in AI.

Prospects for collective action on AI: This project assesses how to promote positive interactions between different AI groups and avoid dangerous forms of competition, such as races in which groups cut corners on safety in order to build AI first. The project applies GCRI’s expertise on social science topics such as the governance of common-pool resources.

Governance of AI and global catastrophic risk: This project draws on prior scholarship and experience on risk governance to develop general insights and strategies for the governance of global catastrophic risk, with emphasis on AI risk.

Support for the AI and global catastrophic risk talent pools: Finally, this project involves identifying, training, and mentoring people who seek to become more active in work on AI and global catastrophic risk. The project will support GCRI’s efforts to scale up and will also support the wider AI and global catastrophic risk community.
