Eric Talbot Jensen Gives Online Lecture on Future Weapons and the Law of Armed Conflict

25 June 2013

On Wednesday 19 June, GCRI hosted an online lecture by Eric Talbot Jensen entitled ‘The Future of the Law of Armed Conflict: Ostriches, Butterflies, and Nanobots’. Jensen is an Associate Professor at the Brigham Young University School of Law and previously spent two decades in the US Army as a Cavalry Officer and a Judge Advocate, including serving as Chief of the Army’s International Law Branch. His lecture was based on his forthcoming paper of the same name [1].

The law of armed conflict (LOAC) regulates conduct during armed hostilities—everything from protections for civilians and prisoners of war to what weapons combatants can use. The LOAC derives from treaties, like the Geneva Conventions, as well as from customary international law, meaning rules that bind states regardless of their treaty obligations.

Jensen’s lecture described how the LOAC is ill-prepared for future weapons. One example participants discussed concerns the principle of proportionality, which prohibits attacks expected to cause harm to civilians and their property that is excessive in relation to the anticipated military advantage. While some future weapons might be incredibly precise—imagine a deadly bioengineered virus delivered to a particular target via nanobots—these same weapons carry the risk of causing a global catastrophe, whether through misuse or unintended consequences. The principle of proportionality does not account for this kind of precision warfare, with its low-probability, high-magnitude risks.

Another example concerns how the LOAC defines “attacks,” which generally means actions that cause “heat, blast, and fragmentation,” as Jensen puts it (think guns and bombs). But future weapons like cyber attacks, nanotechnologies, and genomics can achieve the same military goals as physical attacks without anyone pulling a trigger or pushing a launch button. Take the Stuxnet and Flame computer viruses: does hacking into a state-run computer system to shut down centrifuges constitute an “attack” under the LOAC?

Perhaps the biggest disruptor to the LOAC is the rise of the non-state actor. The LOAC was built on the concept of state conflicts, meaning wars between countries. But groups engaging in warfare are increasingly bound by ideology rather than nationality. Moreover, some rights and duties of war apply only to groups that, for example, wear a fixed, recognizable emblem and carry arms openly. Unless casual wear and laptops count, the LOAC will have a hard time applying these criteria to hacktivist groups like Anonymous.

The LOAC can be updated, but not easily. Traditionally, changes to the LOAC are reactive at best: it took the devastation of World War II before states agreed to protect civilians during wartime under the Fourth Geneva Convention. Given that future weapons have the potential to cause widespread death and even global catastrophe, legal norms to regulate them should be created sooner rather than later.

Participants discussed a number of ways to reduce the risks from future weapons within an outdated legal system. Updating the LOAC itself is one piece of the puzzle, but this will be difficult in the current political environment, said Jensen. Another approach is for international bodies to interpret how future weapons could fall under the existing LOAC. A guide on cyber warfare under the LOAC released by the NATO Cooperative Cyber Defence Centre of Excellence is a step in the right direction [2], and the International Committee of the Red Cross, the Geneva-based humanitarian group, considers future weapons under the LOAC as well. Finally, as online lecture participant James Barrat suggested, professionals could convene to hash out ways to decrease risks from future weapons, much as biologists addressed the risks of recombinant DNA techniques at the 1975 Asilomar Conference. If you have any specific ideas, please contact the Global Catastrophic Risk Institute.

Here is the full abstract of the talk:

The law has consistently lagged behind technological developments. This is particularly true in armed conflict, where the 1907 Hague Conventions and the 1949 Geneva Conventions form the basis for regulating emerging technologies in the 21st century. However, the law of armed conflict, or LOAC, serves an important signaling function to states about the development of new weapons. As advancing technology opens the possibility of not only new developments in weapons, but also new genres of weapons, nations will look to the LOAC for guidance on how to manage these new technological advances. Because many of these technologies are in the very early stages of development or conception, the international community is at a point in time where we can see into the future of armed conflict and discern some obvious points where future technologies and developments are going to stress the current LOAC. While the current LOAC will be sufficient to regulate the majority of future conflicts, we must respond to these discernible issues by anticipating how to evolve the LOAC in an effort to bring these future weapons under the control of the law, rather than have them used with devastating effect before the lagging law can react. This online lecture analyzes potential future advances in weapons and tactics and highlights the LOAC principles that will struggle to apply as currently understood. The online lecture will then suggest potential evolutions of the LOAC to ensure its continuing efficacy in future armed conflicts.

The presentation was hosted online via Skype, with slides shown on Prezi. The audience included James Barrat, a documentary filmmaker and author of the forthcoming artificial intelligence book ‘Our Final Invention’; Michael Burnam-Fink, a Ph.D. student in the Human and Social Dimensions of Science and Technology program at Arizona State University; Sarah Jornsay-Silverberg, an international environmental and human rights lawyer who has written on the law of armed conflict and targeted killings; Kirsten Rodine-Hardy, Assistant Professor of Political Science at Northeastern University; Megan Worman, a cyberlaw and policy consultant; and GCRI’s Tony Barrett, Seth Baum, Kaitlin Butler, Mark Fusco, and Grant Wilson.

[1] Jensen, E. (forthcoming 2014). “The future of the law of armed conflict: ostriches, butterflies, and nanobots.” Michigan Journal of International Law, 35(2). http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2237509

[2] “The Tallinn Manual on the International Law Applicable to Cyber Warfare.” NATO Cooperative Cyber Defence Centre of Excellence. http://www.ccdcoe.org/249.html
