DHS Emerging Technologies Project

29 January 2014

I am writing to announce a new GCRI project on risks from emerging technologies. The project is sponsored by the United States Department of Homeland Security under its Science & Technology Directorate Centers of Excellence, through the Center for Risk and Economic Analysis of Terrorism Events (CREATE), which is based at the University of Southern California.

The project’s Principal Investigator is GCRI Director of Research Tony Barrett. Contributions are also coming from our colleague Jun Zhuang, Assistant Professor of Industrial and Systems Engineering at SUNY-Buffalo, and from me.

Here is the project title and abstract:

Analyzing Current and Future Catastrophic Risks from Emerging-Threat Technologies

This project will develop a methodology for analyzing the risks and risk-management tradeoffs of potential emerging threats by systematically identifying potential catastrophe-enabling developments and indicators of precursor events; estimating the probabilities of facing those precursors; assessing the tradeoffs of available options; and continually monitoring for potential indicators of catastrophe precursors while updating probability estimates with new information and new judgments.
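The monitoring-and-updating step described in the abstract is, in essence, Bayesian updating: revising the estimated probability that a catastrophe precursor is present as indicator observations come in. As a purely illustrative sketch (the function name and all numbers are hypothetical, not drawn from the project itself):

```python
# Illustrative sketch only: Bayesian updating of the probability that a
# catastrophe precursor is present, given that an indicator was observed.
# All probabilities below are hypothetical placeholders.

def update_precursor_probability(prior, p_ind_given_precursor, p_ind_given_no_precursor):
    """Return P(precursor | indicator observed) via Bayes' rule."""
    numerator = p_ind_given_precursor * prior
    denominator = numerator + p_ind_given_no_precursor * (1.0 - prior)
    return numerator / denominator

# Hypothetical example: a weak prior and a moderately diagnostic indicator.
prior = 0.01  # initial estimate that the precursor is present
posterior = update_precursor_probability(prior, 0.6, 0.05)
print(round(posterior, 4))  # the estimate rises after the observation
```

Repeating this update as each new indicator (or expert judgment) arrives gives the continual monitoring loop the abstract describes.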

More detail on the project can be found on the CREATE website.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
