July Newsletter: Summer Talks and Presentations

7 July 2017

Integrated Assessment

GCRI Executive Director Seth Baum gave a talk on “Integrated Assessment of Global Catastrophic Risk and Artificial Intelligence” at the Cambridge University Centre for the Study of Existential Risk (CSER) on June 28. Dr. Baum will also participate in a Tech2025 workshop on future AI risk on July 11 in New York City.

GCRI Director of Research Tony Barrett gave a talk on integrated assessment, nuclear war, AI, and risk reduction opportunities at an Effective Altruism DC event on global catastrophic risks on June 17.

Artificial Intelligence

GCRI Associate Roman Yampolskiy gave a talk on artificial intelligence at UpstartU. Dr. Yampolskiy was also interviewed by Tech Republic about new machine learning research.

Food Security

GCRI Associate Dave Denkenberger gave a keynote address on “Progress in Feeding the Earth If There is a Global Agricultural Catastrophe” at the 2nd International Conference on Food Security and Sustainability in San Diego on June 26, 2017. Dr. Denkenberger also gave an online presentation on “Preliminary Price and Life-Saving Potential of Alternate Foods for Global Agricultural Catastrophes” to the 22nd World Futures Studies Federation World Conference in Jondal, Norway, June 9, 2017.

Help us make the world a safer place! The Global Catastrophic Risk Institute depends on your support to reduce the risk of global catastrophe. You can donate online or contact us for further information.
