May 2013 Newsletter

13 May 2013

Dear friends,

If there’s one thing we know from the study of risk, it’s that sudden, unexpected events can change the game. So it was for the 2010 Deepwater Horizon oil spill, the subject of a new paper by GCRI Deputy Director Grant Wilson. So it is right now for GCRI and our parent organization Blue Marble Space, which recently lost its 501(c)(3) tax status. For Deepwater Horizon, the result was several deaths plus massive environmental and economic damage. For GCRI, it means we have to adjust our fundraising strategy, though fortunately the rest of our operations can continue as normal. In both cases, there are lessons to be learned that can help with future work.

That opportunity to learn is one thing that distinguishes global catastrophes from all other events. For a large enough global catastrophe, civilization may not recover. There could be no opportunity to learn lessons and get it right the next time. There might not be a next time. Because of this, it’s crucial that we get it right on global catastrophes the first time, every time. There may be no second chance.

As always, thank you for your interest in our work. We welcome any comments, questions, and criticisms you may have.

Sincerely,
Seth Baum, Executive Director

May 7: Online Lecture On Geoengineering & Intellectual Property

On Tuesday, May 7, in an online lecture, Aladdin Diakun of the Balsillie School of International Affairs will present his recent work on intellectual property issues raised by geoengineering. The talk will be at noon New York time (16:00 GMT). To join this lecture, please RSVP to Seth Baum (seth@gcrinstitute.org).

Deadline May 16: Join GCRI At The Society For Risk Analysis 2013 Annual Meeting

There is still time to join the global catastrophic risk sessions that GCRI is organizing for the SRA 2013 Annual Meeting (8-11 December, Baltimore). Please send a short (1-5 sentence) speaker bio and presentation description to Seth Baum (seth@gcrinstitute.org) by May 16 at the latest. For information about previous GCRI SRA sessions, please click here.

New Paper On International Law & Deepwater Horizon Oil Spill

This is not the usual GCRI paper announcement. First, it’s on a local catastrophe, involving a technology that, while in some ways harmful, appears not to pose any risk of global catastrophe. Second, we’re not announcing what journal it’s being published in. For details on this unusual paper announcement, plus a discussion of how this relates (and runs counter) to broader academic ‘publication bias’ trends, please see the discussion on the GCRI blog.

Wilson, Grant S. Deepwater Horizon and the Law of the Sea: Was the cure worse than the disease? Journal TBD, forthcoming.

April GCR News Summary

Robert de Neufville presents our second monthly news summary. This month covers the H7N9 flu outbreak; a separate outbreak of coronavirus hCoV-EMC; the nuclear standoff with North Korea; Iran’s nuclear program; new legal scholarship on weapons made from emerging technologies; recent developments in international climate change negotiations; a recent talk by the Future of Humanity Institute’s Nick Bostrom; and more.

For the full summary, please see GCR News Summary April 2013.

We invite you to help us compile news items. If you know of something that may be worth including in the next news summary, please post it in the comment thread of the current summary, or send it via email to Grant Wilson (grant@gcrinstitute.org).

Blue Marble Space 501(c)(3) Revoked

Finally, an unfortunate announcement. GCRI’s fiscal sponsor, Blue Marble Space, has had its 501(c)(3) status revoked by the IRS. We emphasize that this was due to an honest mistake, not to any foul play. All of GCRI’s (and BMS’s) funds are intact. In practical terms, all this means is that GCRI/BMS cannot guarantee tax deductibility for any donations made until 501(c)(3) status is restored. Donations can still be made now, and they may be retroactively tax deductible. This is a setback for GCRI’s fundraising, but all other GCRI operations will continue as normal. For further information, please see this blog post. If you have any questions about this, please contact Seth Baum (seth@gcrinstitute.org).
