June Newsletter: Summer Talks

21 June 2018

Artificial Intelligence

GCRI Associate Roman Yampolskiy gave a talk on AI safety at the Global Challenges Summit 2018, held May 17-19 in Astana, Kazakhstan.

GCRI Executive Director Seth Baum and GCRI Associate Roman Yampolskiy participated in a workshop on “AI Coordination & Great Powers” hosted by Foresight Institute in San Francisco on June 7.

GCRI Executive Director Seth Baum gave a seminar on “AI Risk, Ethics, Social Science, and Policy” hosted by the University of California, Berkeley Center for Human-Compatible Artificial Intelligence (CHAI) on June 11.

Effective Altruism

GCRI Executive Director Seth Baum gave a talk on “Reconciling Effective Altruism and International Security Perspectives” at Effective Altruism Global San Francisco 2018 on June 9. He also spoke in two “whiteboard sessions”: one titled “Do Existing Institutions Have a Role to Play in AI Strategy?” with Jade Leung and another titled “How Important Is Civilizational Collapse?” with Haydn Belfield and Gregory Lewis.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
