GCR News Summary May 2015

1 June 2015

China has converted some of its long-range missiles to carry multiple warheads. China’s decision to retrofit its missiles with “multiple independently targetable reentry vehicles” (MIRVs) appears to be at least in part a response to the US expanding its missile defense in the Pacific. The US, the UK, France, and Russia already have missiles equipped with MIRVs. The New York Times noted in an editorial that because MIRVs can overwhelm missile defense systems, they increase the incentive for an adversary to launch a first strike against missiles on the ground. The New York Times argued that the Chinese move could spur India and Pakistan to engage in a local arms race. The paper also called for talks between the US and China on strategic stability in Asia. Hans M. Kristensen, director of the Nuclear Information Project at the Federation of American Scientists, called it “a bad day for nuclear restraint” and said that the move “strains the credibility of China’s official assurance that it only wants a minimum nuclear deterrent and is not part of a nuclear arms race”.

The Sunday Times reported that, according to “senior American officials”, Saudi Arabia has decided to acquire “off-the-shelf” nuclear weapons from Pakistan. “There has been a longstanding agreement in place with the Pakistanis, and the House of Saud has now made the strategic decision to move forward,” an anonymous US defense official told the paper. Saudi Arabia has committed under the Nuclear Nonproliferation Treaty not to acquire nuclear weapons. But Saudi Arabia is reportedly concerned that the nuclear deal the P5+1 countries—the five permanent members of the UN Security Council plus Germany—are negotiating will not stop Iran from producing nuclear weapons of its own. Saudi officials wouldn’t comment on the report, but Saudi Ambassador Adel Al-Jubeir told CNN that “the kingdom of Saudi Arabia will take whatever measures are necessary in order to protect its security”. Qazi Khalilullah, a spokesperson for Pakistan’s foreign office, said the report was “utterly unfounded”. “As a responsible nuclear state, Pakistan is fully aware of its responsibilities,” said Khalilullah. “Pakistan’s nuclear program is purely for its own legitimate self-defense and maintenance of a credible minimum deterrence.”

Former CIA counter-proliferation specialist Valerie Plame wrote in The Huffington Post that she believes nuclear weapons are the greatest existential threat to humanity. Plame noted that the nuclear powers are all modernizing or expanding their arsenals. Many are also adopting a “high-alert” posture that increases the risk of inadvertent nuclear conflict. At the same time, terrorists are working to get their hands on nuclear materials. “Whatever other issues people care about—poverty, the environment, inequality, and so many others—if we don’t get this one right, and soon, nothing else will matter,” Plame said.

President Obama said in a speech to the graduating class of the US Coast Guard Academy that climate change is “a serious threat to global security”:

Rising seas are already swallowing low-lying lands, from Bangladesh to Pacific islands, forcing people from their homes. Caribbean islands and Central American coasts are vulnerable, as well. Globally, we could see a rise in climate change refugees. And I guarantee you the Coast Guard will have to respond. Elsewhere, more intense droughts will exacerbate shortages of water and food, increase competition for resources, and create the potential for mass migrations and new tensions. All of which is why the Pentagon calls climate change a “threat multiplier.”

A study in Nature Climate Change found that sea levels are rising increasingly fast. The new measurements are consistent with the water contributed to the oceans from melting Antarctic and Greenland ice sheets. Christopher Watson, the paper’s lead author, said that sea levels are now rising at twice the average rate of the last century.

The World Health Organization (WHO) asked its members to set up a $100 million contingency fund for public health emergencies. WHO also recommended forming an emergency workforce that could respond rapidly to crises. WHO Director-General Margaret Chan told the organization’s annual assembly that the agency had been “overwhelmed” by the recent Ebola outbreak in West Africa. “I do not ever again want to see this organization faced with a situation it is not prepared, staffed, funded, or administratively set up to manage,” Chan said.

Rosa Brooks argued in Foreign Policy that autonomous weapon systems that select their own targets may be “far better than human beings at complying with international humanitarian law”.

Face it: we humans are fragile and panicky creatures, easily flustered by the fog of war. Our eyes face only one direction; our ears register only certain frequencies; our brains can process only so much information at a time. Loud noises make us jump, and fear floods our bodies with powerful chemicals that can temporarily distort our perceptions and judgment.

As a result, Brooks wrote, humans regularly fail to distinguish between combatants and civilians and misjudge what actions are necessary and proportional in combat. In many domains machines already show better judgment than humans. As long as weapons systems continue to narrowly follow the rules we set for them, Brooks wrote, there may be “a legal and ethical obligation to use ‘killer robots’ in lieu of—well, ‘killer humans’”.

Tesla and SpaceX founder Elon Musk—who has said artificial intelligence (AI) could be our greatest existential risk—is worried that Google will build “a fleet of artificial-intelligence-enhanced robots capable of destroying mankind”. According to a new authorized biography, Musk, who is close friends with Google CEO Larry Page, did not suggest that Google is intentionally doing anything dangerous. But Musk did say that he fears Google will “produce something evil by accident”. The Economist wrote that while “full” AI may not come soon, “Just as armies need civilian oversight, markets are regulated and bureaucracies must be transparent and accountable, so AI systems must be open to scrutiny. Because systems designers cannot foresee every set of circumstances, there must also be an off-switch.”

This news summary was put together in collaboration with Anthropocene. Thanks to Tony Barrett, Seth Baum, Kaitlin Butler, Trevor White, and Grant Wilson for help compiling the news.

For last month’s news summary, please see GCR News Summary April 2015.

You can help us compile future news posts by putting any GCR news you see in the comment thread of this blog post, or by sending it via email to Grant Wilson (grant [at] gcrinstitute.org).

Image credit: US Air Force/Lt. Col. Leslie Pratt

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
