February Newsletter: Outlook for 2017—GCRI & US Politics

by Seth Baum | 7 February 2017

Dear friends,

One year ago, I described GCRI’s success with academic and popular publishing and speaking, and noted that this productivity could not be sustained on the small budget we had. Over the past year, we have focused more on fundraising, though regrettably with only limited success. It’s enough to keep our doors open, but not enough to perform at high capacity.

But enough about us. What’s more important is that, over the past year, the world has changed. The United States has elected a wholly unusual president. Shortly after the election, I wrote in the Bulletin of the Atomic Scientists that the new administration is likely to shift global catastrophic risk in several ways—in most cases increasing the risk but in some cases decreasing it. Already, some of these predictions are coming true, leaving the world less safe.

The new president is significant in his own right, but equally significant is the political culture that elected him. Broad portions of the American electorate have become disdainful of the country’s political, cultural, and intellectual establishment. They have elected a person who openly flouts established norms of democracy, civility, and rationality.

As an academic think tank, GCRI is part of this establishment. I worry that, in rejecting establishment expertise, the new administration will inadvertently bring worse results. For example, its restriction on immigration from seven predominantly Muslim countries is intended to make the U.S. safer from violent extremism. However, research shows that immigrants tend to be less violent than native-born Americans and that selective immigration restrictions tend to fuel violent anti-American sentiment in the restricted places. Thus, the new immigration restriction would appear to be counterproductive.

But I am not upset at the Americans who disdain the establishment, and you should not be either. Instead, we should be curious about why they feel this way and how we can bridge the divide. For example, much of the anti-establishment sentiment comes from rural and small town communities that have gotten the short end of economic and cultural changes. Therefore, we should seek policies that improve conditions in their communities while also reducing global catastrophic risk. Such policies include investments in local environmental sustainability and financial system regulations that reduce the risk of economic collapse.

Two years ago, I published a paper called “The Far Future Argument for Confronting Catastrophic Threats to Humanity: Practical Significance and Alternatives” arguing that efforts to reduce global catastrophic risk should factor in the perspectives of those who would be involved in the risk-reducing actions. Today, it is clear that we need to factor in the perspectives of citizens who are not part of our usual academic and intellectual communities. I encourage all of us to reach out to our friends, family members, and other acquaintances in order to build solidarity for sound policies to keep the world safe.

Sincerely,
Seth Baum, Executive Director

Events

GCRI directors Seth Baum and Tony Barrett will participate in the Garrick Institute for the Risk Sciences’ first Colloquium on Catastrophic and Existential Threats, March 27–29, in Los Angeles.

Food Security

GCRI associate David Denkenberger’s paper, “Feeding Everyone if the Sun is Obscured and Industry is Disabled,” has been accepted for publication in the International Journal of Disaster Risk Reduction, doi: 10.1016/j.ijdrr.2016.12.018.

Popular Media

Seth Baum is featured in the Story Collider podcast for the talk he gave last year on the controversy over his research on winter-safe deterrence, a policy proposal for reducing the risk of catastrophic nuclear winter.

GCRI associate Roman Yampolskiy was interviewed about artificial intelligence in “AI Has Tremendous Ability to Help in All Domains of Interest”.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
