November Newsletter: Media Engagement Intern Program

8 November 2016

Dear friends,

I am delighted to announce GCRI’s new media engagement internship program. We have selected four interns from a highly competitive pool of applicants. Each is a talented student or young professional with a promising career ahead in global catastrophic risk. They are Marilyn Cotrich, an undergraduate at Arizona State; Jenny Mith, a community manager at IVY; Adam Scholl, a media entrepreneur and independent analyst; and Lena Wang, an undergraduate at the University of Sydney currently on exchange at UCLA. They are working with GCRI to improve media coverage of global catastrophic risk, including coverage of GCRI itself. Global catastrophic risk can be a difficult topic to cover because of its technical and interdisciplinary nature, but GCRI is committed to supporting a robust public conversation about it.

As always, thank you for your interest in our work. We welcome any comments, questions, and criticisms you may have.

Sincerely,
Seth Baum, Executive Director

GCR News Summary

Our news summaries cover events across the breadth of GCR topics, including nuclear disarmament, climate change, artificial intelligence, and infectious diseases. Matthijs Maas wrote our summary for the months of August and September.

Artificial Intelligence

GCRI Executive Director Seth Baum published two popular media articles on artificial intelligence: “Tackling Near and Far AI Threats at Once” in the Bulletin of the Atomic Scientists and “Should We Let Uploaded Brains Take Over the World?” on the Scientific American Blog Network.

GCRI Associate Roman Yampolskiy gave two talks on artificial intelligence: “The 4th Industrial Revolution and the Future of Society” at the 2016 International Judicial Symposium and “Artificial Intelligence: Cyber Security Threat or Opportunity?” at the Infosecurity Magazine North America Fall Virtual Conference.

Popular Media

Seth Baum was interviewed on NonProphets about the challenge of forecasting global catastrophes. NonProphets is a podcast on forecasting hosted by GCRI Director of Communications Robert de Neufville along with Atief Heermance and Scott Eastman.

Seth Baum participated in a Kickass News podcast on artificial intelligence with Future of Humanity Institute Director Nick Bostrom and the creator of AMC’s drama series Humans.

Seth Baum was quoted in an Associated Press article on risk and the US presidential candidates, which was reprinted in the New York Times, the Washington Post, and many other news outlets.

Upcoming Events

GCRI Associate Roman Yampolskiy will speak at the Envision Conference, December 2-4 at Princeton University.

GCRI Executive Director Seth Baum will speak at YHouse YCafé Consciousness Club on November 9 and at Nerd Nite NYC on November 19, both in New York City.

GCRI Director of Research Tony Barrett will host and speak at a symposium on “Current and Future Global Catastrophic Risks” on December 14 as part of the Society for Risk Analysis Annual Meeting.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
