GCRI Receives $200,000 for 2021 Work on AI

24 February 2021

I am delighted to announce that GCRI has received a new $200,000 donation from Gordon Irlam to fund work on AI in 2021. Irlam previously made donations funding our AI project work in 2019 and 2020.

Irlam explains in his own words why he chose to support our work:

“It isn’t enough that we research technical AGI alignment. Any such technical AGI alignment scheme must then be implemented. This is the domain of AGI policy. GCRI is one of the leading U.S. organizations working on AGI policy.”

All of us at GCRI are grateful for this donation. We are excited to continue our work developing measures to ensure that AI is developed and deployed safely.

Our projects for 2021 are mostly concentrated on a unifying theme: policy that could affect future artificial general intelligence (AGI) technology.

Continuation of prior projects: We will continue work on select projects from previous years.

Further support for the AI and global catastrophic risk talent pools: This project extends our successful advising and collaboration programs of 2019 and 2020.

Research on the benefits and harms of raising awareness about AGI: This project will evaluate when and how it would be good to raise awareness of AGI and AGI issues, especially with policy audiences.

Research on current policy measures that could improve future AGI outcomes: This project will study policy measures that could be implemented now and could improve the outcomes of future AGI technology.

Research on near-term steps that could improve future AGI policy: This project will study steps that could be taken in the near term to lay the groundwork for better AGI policy in the future.

AGI policy outreach: This project will conduct outreach to various policy institutions and promote ideas developed in GCRI’s research projects.


Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
