May Newsletter: Molecular Nanotechnology

by Seth Baum | 22 May 2018

Dear friends,

It has been a productive month for GCRI, with new papers by several of our affiliates. Here, I would like to highlight one by Steven Umbrello and me, on the topic of molecular nanotechnology, also known as atomically precise manufacturing (APM).

At present, APM exists only in crude forms, such as the work recognized by the 2016 Nobel Prize in Chemistry. However, it may eventually revolutionize manufacturing, making it inexpensive and easy to produce a wide range of goods and resulting in what leading expert Eric Drexler calls “radical abundance”. The APM revolution could also have major downsides, such as the capacity to make more and more powerful weapons.

In our new paper “Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing”, Umbrello and I assess the effects of APM across the full range of important sectors. We find net benefits from increases in material wealth, improved environmental protection, the enabling of nuclear disarmament, and space travel. We find net harms from rogue actor violence and AI. We also assess surveillance but find its net effect ambiguous. We tentatively find the overall effect to be net beneficial, with the largest effect on the environment: APM could be the breakthrough technology that solves climate change. However, there is a lot of uncertainty, and the balance could easily go the other way.

Working on this paper, I was struck by how little attention APM is getting, especially compared to other future technologies like AI and geoengineering. I hope that our new paper helps draw attention to APM and inspires more work on this important topic.

Sincerely,
Seth Baum, Executive Director


General Catastrophic Risk

GCRI Junior Associate Matthijs Maas co-authored a paper with Hin-Yan Liu and Kristian Cedervall Lauta, “Governing Boring Apocalypses: A New Typology of Existential Vulnerabilities and Exposures for Existential Risk Research”, forthcoming in Futures.

Nanotechnology

GCRI Executive Director Seth Baum and Junior Associate Steven Umbrello have a new paper, “Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing”, forthcoming in Futures.

Extraterrestrial Intelligence

GCRI Associate Jacob Haqq-Misra has a paper on the risk of messaging to extraterrestrial intelligence (METI), “Policy Options for the Radio Detectability of Earth”, forthcoming in Futures. Haqq-Misra recently discussed the METI risk problem on the Science Friday radio show with Kelly Smith and Sheri Wells-Jensen.

Artificial Intelligence

GCRI Associate Dave Denkenberger has a paper, “Classification of Global Catastrophic Risks Connected with Artificial Intelligence”, which he co-authored with Alexei Turchin, forthcoming in AI & Society.

Popular Media

GCRI Executive Director Seth Baum and Director of Communications Robert de Neufville were interviewed on The Future of Life Institute Podcast about their recent paper with GCRI Director of Research Tony Barrett on the probability of nuclear war.

GCRI Executive Director Seth Baum wrote an opinion piece in Project Syndicate arguing that we need to take the danger of an AI apocalypse seriously.

GCRI Associate Roman Yampolskiy was interviewed by Simulation about Artificial Intelligence and Security.
