What Trump Means for Global Catastrophic Risk

19 December 2016

View in the Bulletin of the Atomic Scientists

This article discusses how President Trump’s administration may affect policies and ideas pertaining to global catastrophic risk. The article argues that Trump’s authority to launch nuclear weapons should particularly concern us, given his tendency to behave erratically. Trump’s election also has implications for the prospect of conflict with Russia and China, the stability of the global order, the survival of democracy in the US, and our ability to avoid a global climate change catastrophe. “Just because we have avoided global catastrophe so far,” the article states, “doesn’t mean we will continue to do so.”

The article was covered in Quartz and Elite Daily.

The article begins as follows:

In 1987, Donald Trump said he had an aggressive plan for the United States to partner with the Soviet Union on nuclear non-proliferation. He was motivated by, among other things, an encounter with Libyan dictator Muammar Qaddafi’s former pilot, who convinced him that at least some world leaders are too unstable to ever be trusted with nuclear weapons. Now, 30 years later, Trump—following a presidential campaign marked by impulsive, combative behavior—seems poised to become one of those unstable world leaders.

Global catastrophic risks are those that threaten the survival of human civilization. Of all the implications a Trump presidency has for global catastrophic risk—and there are many—the prospect of him ordering the launch of the massive US nuclear arsenal is by far the most worrisome. In the United States, the president has sole authority to launch atomic weapons. As Bruce Blair recently argued in Politico, Trump’s tendency toward erratic behavior, combined with a mix of difficult geopolitical challenges ahead, means the probability of a nuclear launch order will be unusually high.

If Trump orders an unwarranted launch, then the only thing that could stop it would be disobedience by launch personnel—though even this might not suffice, since the president could simply replace them. Such disobedience has precedent, most notably in Vasili Arkhipov, the Soviet submarine officer who refused to authorize a nuclear launch during the Cuban Missile Crisis; Stanislav Petrov, the Soviet officer who refused to relay a warning (which turned out to be a false alarm) of incoming US missiles; and James Schlesinger, the US defense secretary under President Richard Nixon, who reportedly told Pentagon aides to check with him first if Nixon began talking about launching nuclear weapons. Both Arkhipov and Petrov are now celebrated as heroes for saving the world. Perhaps Schlesinger should be too, though his story has been questioned. US personnel involved in nuclear weapons operations should take note of these tales and reflect on how they might act in a nuclear crisis.

The remainder of the article is available in the Bulletin of the Atomic Scientists.

Image credit: unknown


This blog post was published on 28 July 2020 as part of a website overhaul and backdated to reflect the time of the publication of the work referenced here.


Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
