Eric Talbot Jensen to Deliver Online Lecture on Law for Emerging Weapons Technologies 19 June

3 June 2013

This is the pre-event announcement for an online lecture by Eric Talbot Jensen, Associate Professor of Law at Brigham Young University.

Here are the full details of the talk:

The Future of the Law of Armed Conflict: Ostriches, Butterflies, and Nanobots
Based on a paper of the same title.
Wednesday 19 June, 18:00 GMT (11:00 Los Angeles, 12:00 Utah, 14:00 New York, 19:00 London).
To be held online via Skype. RSVP required by email to Seth Baum (seth [at] gcrinstitute.org). Space is limited.

Abstract: The law has consistently lagged behind technological developments. This is particularly true in armed conflict, where the 1907 Hague Conventions and the 1949 Geneva Conventions form the basis for regulating emerging technologies in the 21st century. However, the law of armed conflict, or LOAC, serves an important signaling function to states about the development of new weapons. As advancing technology opens the possibility of not only new developments in weapons but also new genres of weapons, nations will look to the LOAC for guidance on how to manage these new technological advances. Because many of these technologies are in the very early stages of development or conception, the international community is at a point where it can look into the future of armed conflict and discern some obvious points where future technologies and developments will stress the current LOAC. While the current LOAC will be sufficient to regulate the majority of future conflicts, we must respond to these discernible issues by anticipating how to evolve the LOAC in an effort to bring these future weapons under the control of the law, rather than have them used with devastating effect before the lagging law can react. This online lecture analyzes potential future advances in weapons and tactics and highlights the LOAC principles that will struggle to apply as currently understood. The lecture will then suggest potential evolutions of the LOAC to ensure its continuing efficacy in future armed conflicts.
