June Newsletter: Nuclear Weapons Ban Treaty

by Seth Baum | 16 June 2017

Dear friends,

This past May, a draft treaty to ban nuclear weapons was released at the United Nations. Nuclear weapons are a major global catastrophic risk, one that GCRI has done extensive work on. At first glance, the nuclear ban treaty would seem like something to wholeheartedly support. However, upon closer inspection, its merits are ambiguous.

The treaty is not expected to eliminate nuclear weapons, because the nuclear-armed countries won’t sign it. Instead, it seeks to strengthen the norm against nuclear weapons and increase pressure for disarmament. It also places restrictions on the countries that do sign it. These restrictions could work to Russia’s advantage by ending the practice of European NATO countries hosting US nuclear weapons without requiring any reciprocal action from Russia. Indeed, the ban treaty may tip the geopolitical balance away from NATO and its Pacific allies and toward Russia and China, because the latter have minimal civil society to pressure their governments in support of the ban treaty. There are also concerns that the ban treaty could cause harmful confusion with existing treaties, including the Nuclear Non-Proliferation Treaty [http://thebulletin.org/nuclear-weapons-ban-should-first-do-no-harm-npt10599] and the Comprehensive Test Ban Treaty.

What is unambiguous is that the nuclear ban treaty leadership means well. I saw this firsthand when I spoke at the 2014 Vienna Conference on the Humanitarian Impact of Nuclear Weapons, a prelude event to the ban treaty. However, much of what I’ve seen in support of the ban treaty characterizes nuclear weapons as inherently immoral and thus needing to be banned, regardless of any surrounding issues. I disagree with this position. I believe the full set of issues should be taken into account, and when they are, the merits of the ban treaty become rather ambiguous. This doesn’t mean the treaty would be a bad thing, only that it’s hard to tell whether it’s good or bad.

Sincerely,
Seth Baum, Executive Director

Publications

GCRI Executive Director Seth Baum has a new research paper, “Reconciliation Between Factions Focused on Near-Term and Long-Term Artificial Intelligence,” forthcoming in AI & Society (a pre-print and non-technical summary of the paper are available here). The paper argues that, instead of debating each other, those who favor attention to near-term AI and those who favor attention to long-term AI can pursue mutually beneficial opportunities. The paper covers three such opportunities aimed at improving the societal impacts of AI: 1) changing the social norms of AI researchers, 2) technical research on AI safety, and 3) public policy for AI.

GCRI Associate Roman Yampolskiy has a new edited volume out, Technological Singularity: Managing the Journey, co-edited with Victor Callaghan, James Miller, and Stuart Armstrong. The volume includes two chapters by Kaj Sotala and Yampolskiy, “Risks of the Journey to the Singularity” and “Responses to the Journey to the Singularity,” as well as a paper by GCRI Director of Research Tony Barrett and GCRI Executive Director Seth Baum, “Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process” (a pre-print and non-technical summary of Barrett and Baum’s paper are available here).

Talks, Presentations, and Other Contributions

GCRI Executive Director Seth Baum will participate in a Tech2025 workshop on future AI risk on June 20 in New York City. Baum will also give a talk on “Integrated Assessment of Global Catastrophic Risk and Artificial Intelligence” at the Cambridge University Centre for the Study of Existential Risk on June 28.

GCRI Director of Research Tony Barrett gave a talk on superintelligence risk and policy analysis, and on the governance of technologies associated with catastrophic risks, at the 2017 Conference on Governance of Emerging Technologies at Arizona State University.

GCRI Director of Research Tony Barrett will also give a talk on the risks of nuclear war and AI catastrophe, and on the value of risk research, at an Effective Altruism DC event on catastrophic risks on June 17 in Washington, DC.

GCRI Executive Director Seth Baum contributed a discussion of nuclear war risk analysis to the Global Challenges Foundation’s 2017 Report on Global Risks.

Help us make the world a safer place! The Global Catastrophic Risk Institute depends on your support to reduce the risk of global catastrophe. You can donate online or contact us for further information.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
