November Newsletter: Survey of AI Projects

by Seth Baum | 16 November 2017

Dear friends,

This month we are announcing a new paper, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. This is more than a typical research paper: at 99 pages, it pulls together several months of careful work. It documents and analyzes what’s going on right now in artificial general intelligence (AGI) R&D in terms that are useful for risk management, policy, and related purposes. Essentially, this is what we need to know about AGI R&D to make a difference on the issue.

AGI is AI that can reason across a wide range of domains. It’s also the type of AI that could most readily cause a global catastrophe, due to its potential to outsmart humanity and gain control of the planet. There’s a lot of research about AGI risk and related topics in ethics and policy, including work by us at GCRI. However, that work has been largely disconnected from the actual state of affairs in AGI R&D. This new survey changes that. We think it will be a valuable resource for AGI risk analysis and risk management.

You can download the paper here.

Sincerely,
Seth Baum, Executive Director

General Risk

GCRI Associate Jacob Haqq-Misra is guest editing a special issue of Futures on the detectability of future Earths and terraformed worlds. This special issue is looking for papers that consider the future evolution of the Earth system from an astrobiological perspective as well as how humanity or other technological civilizations could artificially create sustainable ecosystems on lifeless planets. The deadline for submissions is November 30.

Artificial Intelligence

GCRI Associate Roman Yampolskiy recently gave several talks on AI safety and AI security: a talk titled “Taxonomy of Pathways to Dangerous Artificial Intelligence” at the Tech, Security and Democracy colloquium at Laval University on October 5; a talk on AI safety and security for the Society for Information Management at Bellarmine University on October 10; and a talk on AI safety as part of the AI With the Best (#AIWTB) online conference on October 14.

Food Security

GCRI Associate Dave Denkenberger wrote in the Effective Altruism Forum about research he conducted under a Centre for Effective Altruism grant, which found that investing in alternate foods may be as important as investing in AI safety.

Popular Media

GCRI Executive Director Seth Baum is featured in a Tech2025 podcast on “Evil AI, Killer Robots, Dragon Kings and Cupcakes”.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
