Meet the Team Tuesdays: Tony Barrett

20 November 2012

This post is part of a weekly series introducing GCRI’s members.

Tony and I met at the 2010 Society for Risk Analysis Annual Meeting, where I co-chaired two sessions on global catastrophic risk with Vanessa Schweizer. Vanessa and Tony had attended Carnegie Mellon’s Engineering & Public Policy PhD program together. Tony and I both wanted to focus our careers on GCR and started thinking about the most effective ways of doing so. We benchmarked many other organizations and concluded that there was a need for something new: while many organizations work on some aspect of GCR, none looked at the full breadth of GCR topics and put all the pieces together. So we founded GCRI to do this. Tony is now GCRI’s Director of Research and is also working at RAND’s Arlington, VA office as a Stanton Nuclear Security Fellow. – Seth

Seth Baum: Tony, you and I have been working on GCR for almost two years now. What led you to take such an interest in GCR?

Tony Barrett: I think I’m like a lot of people in that I’ve aimed my work to have maximum positive impact within the context of my resources and opportunities, and I’ve also long been interested in both technical and policy issues for a variety of big risks facing society. Over the years, that guided my career first toward energy and environmental issues, then toward analyzing and reducing risks of terrorist attacks with weapons of mass effect. However, I have always read and thought about a wider set of risks, and global catastrophic risks (GCRs) are potentially the most serious. As I saw the work others were doing on GCRs, it seemed to me there would be value in doing what I could to apply risk analysis and decision analysis methods to characterize the risks and the tradeoffs of potential risk-reduction measures.

Seth Baum: You’ve meanwhile been living and working in Washington, D.C. What’s that like? What does D.C. bring to the world of GCR?

Tony Barrett: D.C. is a great place for anything related to public policy, because of the very high density of people working with the U.S. government, nongovernmental organizations, and think tanks, at both national and international levels. There are frequent and convenient opportunities to learn about important developments and to have candid exchanges with others working on the issues.

Seth Baum: You’re now on a fellowship at RAND. What is RAND, and what is your fellowship?

Tony Barrett: RAND was one of the first think tanks, and is still one of the most respected nonprofit, nonpartisan think tanks. It conducts studies on public policy issues in a wide range of areas, often but not exclusively for the U.S. government. RAND aims to provide objective analysis, including on politicized issues.

I am currently on a one-year nuclear security fellowship at RAND, funded by the Stanton Foundation. It’s a great opportunity to spend a year focused on a particular problem related to nuclear security, with guidance from leading experts, and to learn how RAND approaches issues by serving as a team member on regular RAND research projects.

Seth Baum: You recently presented your work at a conference at the Center for Strategic and International Studies, a leading D.C. think tank. How did that go? What was the conference all about?

Tony Barrett: It was part of a regular, ongoing conference series by the CSIS Program on Nuclear Issues, or PONI. It is an excellent way for early-career people to engage with senior people working on nuclear security issues and to get exposure and guidance on their own work. Certainly that has been my experience with PONI. At this conference, I gave a brief presentation on some (GCRI-supported) work I have done on assessing risks of inadvertent nuclear war. A number of quite knowledgeable people gave me helpful feedback and suggestions for related future work.

Seth Baum: Tell me more about your research on nuclear war. First, what is inadvertent nuclear war? Is this something that society should be worried about?

Tony Barrett: Many potential pathways to nuclear war have been identified and discussed in the literature. Broadly speaking, much nuclear strategizing has been primarily concerned with deterring what we might call intentional nuclear war scenarios, i.e. where one nuclear-armed nation deliberately launches an attack on another. However, efforts to maintain credible deterrents, i.e. the capability to launch a devastating counterstrike in response to a first strike, have created some risk of accidental or inadvertent nuclear war. Of course these are not new issues; experts and decision makers have been worrying about them since the beginning of the nuclear age. And most people who study the issues seem to feel that, at least for the nations with the longest-standing and largest nuclear arsenals, the probabilities of both intentional and unintentional nuclear war pathways are generally much lower than they were at various points during the Cold War. However, the probabilities are still not zero, and of course the consequences would be horrendous.

Seth Baum: What is your nuclear war research specifically about, and how does it contribute to an understanding of nuclear war?

Tony Barrett: I apply methods of quantitative or probabilistic risk analysis to assess risks of nuclear war scenarios, especially inadvertent nuclear war. I try to answer the standard questions of any risk analysis or decision analysis: What could go wrong? How likely is it? What can we do about it, and what are the tradeoffs? A lot of earlier work considered these questions at a qualitative level, at least for the situations as they stood at the time and the outlook from that point. I have been trying to build on and update that work, and to apply quantitative methods where possible to characterize the relative sizes of the risks involved and the potential for risk reduction. That could help in prioritizing risk-reduction efforts, both now and into the future.
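
To give a sense of what such a quantitative treatment can look like, here is a minimal illustrative sketch in Python of a fault-tree-style calculation. The numbers and event names are entirely made-up placeholders; this is not Tony's model, just an example of how assumed event rates and conditional probabilities can be combined into an annual risk estimate.

# Illustrative sketch only: a toy fault-tree-style estimate of the annual
# probability of inadvertent nuclear war. All numbers are placeholder
# assumptions for illustration, not figures from GCRI or RAND research.
import math

false_alarm_rate = 0.5          # assumed serious false alarms per year
p_alarm_reaches_leaders = 0.01  # assumed chance an alarm survives technical and procedural filters
p_leaders_order_launch = 0.05   # assumed chance leaders mistake the alarm for a real attack

# Probability that any single false alarm leads to an inadvertent launch.
p_launch_given_alarm = p_alarm_reaches_leaders * p_leaders_order_launch

# Expected number of inadvertent launches per year.
launches_per_year = false_alarm_rate * p_launch_given_alarm

# Annual probability of at least one inadvertent launch, treating alarms as
# independent Poisson arrivals: P(at least one) = 1 - exp(-rate).
p_war_per_year = 1 - math.exp(-launches_per_year)

print(f"Toy annual probability of inadvertent nuclear war: {p_war_per_year:.1e}")

The point of a model like this is less the specific number than the ability to see which assumptions drive the result, and therefore where risk-reduction measures or better data would matter most.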

Seth Baum: Finally, as GCRI’s Director of Research, you’ve played a leading role in crafting GCRI’s research interests and priorities. What research topics do you hope to see receive more attention? What would you recommend for up-and-coming GCR researchers?

Tony Barrett: One thing I aim for GCRI to do is to help characterize the relative risks, and the risk-reduction opportunities, in various GCR areas, which could indicate where further GCR research of particular types would yield the greatest benefits. However, that work is either ongoing or ahead of us, so please either watch this space or get involved with GCRI to help tackle those questions. Otherwise, for now I’ll just echo career advice I’ve heard from others: don’t over-plan your career, because it’s hard to predict what opportunities will come your way. Instead, focus on doing a good job and having the greatest positive impact where you can. There are a lot of people doing important work that helps build society’s resilience to catastrophes, or that can give insights about what kinds of catastrophes could occur in the future as technologies and societies change. Not everybody doing this work needs to think about it in terms of GCR per se, but there is great potential value in having some people make connections across domains. We need both to see the big picture of risks and to engage with a wide range of fields at an appropriate level of detail. So I think there’s great value in having GCR generalists who apply a set of skills across a number of GCR areas, as well as specialists in each GCR area who engage with the generalists. We aim for GCRI to be a place for all of those people to come together.
