February Newsletter: Military AI – The View from DC

by Seth Baum | 21 February 2018

Dear friends,

This past month, GCRI participated in two invitation-only events in Washington, DC, on military and international security applications of artificial intelligence. First, GCRI Director of Research Tony Barrett attended a workshop hosted by the AI group at the think tank Center for a New American Security. Then I gave a talk on AI at a workshop on strategic stability hosted by the Federation of American Scientists.

These two events show that the DC international security community is quite interested in AI and its implications for international security, and that it sees GCRI as an important contributor on this topic. Tony Barrett and I both have close ties to this community, and it recognizes us as experts in both AI and international security. For our part, we are happy to participate, both to observe what’s happening and to try to steer the conversation in directions that reduce AI risk.

One overarching theme is that the conversation focuses mainly on current and near-term narrow AI systems. Participants are aware (to varying degrees) of the idea of future artificial general intelligence and its transformative potential, but they see it as too early-stage a technology to merit their immediate attention. I think this is fine. There isn’t a clear role for them to play on AGI at this time, and meanwhile there are plenty of important near-term issues, including near-term AI, that demand their attention. What’s most important is that there is a good ongoing conversation about AI and international security, with attention to risk and safety issues, so that both the near-term and long-term AI issues can be addressed. For more on this theme, please see my paper “Reconciliation between Factions Focused on Near-Term and Long-Term Artificial Intelligence”.

Sincerely,
Seth Baum, Executive Director

Artificial Intelligence

GCRI Executive Director Seth Baum spoke about the impact of AI on strategic stability at an invitation-only workshop hosted by the Federation of American Scientists on “Dangerous or Disruptive Technologies to Strategic Nuclear Stability” on February 15. “Strategic stability” refers to conditions in which countries tend to avoid arms races and de-escalate crises instead of moving towards war. Baum argued that AI has a variety of impacts on strategic stability, including on all three legs of the nuclear triad: bomber planes, land-based missiles, and submarines.

GCRI Director of Research Tony Barrett attended an invitation-only event on “Autonomous Systems and Nuclear Stability” at the Center for a New American Security, an influential think tank in Washington, DC. The event brought together senior officials from the US defense sector to discuss issues such as the prospects for arms races in military AI and the implications for nuclear weapons risks.

GCRI Junior Associate Steven Umbrello has a chapter he co-authored with Angelo Frank De Bellis titled “A Value-Sensitive Design Approach to Intelligent Agents” in Roman Yampolskiy’s forthcoming edited collection of essays, Artificial Intelligence Safety and Security (CRC Press, 2018).

Teaching

GCRI Associate Gary Ackerman is leading three teams of students at SUNY Albany on senior capstone projects for GCRI. The projects will assess government activities on global catastrophic risk and related topics. Ackerman recently joined the Albany faculty after serving as a Director at the National Consortium for the Study of Terrorism and Responses to Terrorism (START) at the University of Maryland.

Media Coverage

GCRI Associate Roman Yampolskiy was interviewed by Fifth Domain for an article about the prospects of an AI arms race in cybersecurity. He told Fifth Domain that AI used to provide cybersecurity could be vulnerable to hacking by stronger AI.

Help us make the world a safer place! The Global Catastrophic Risk Institute depends on your support to reduce the risk of global catastrophe. You can donate online or contact us for further information.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
