GCRI Affiliates Overhaul

6 November 2018

GCRI has made several major changes to our roster of affiliates, as reflected on our People page. These changes make our listing of affiliates more consistent with how GCRI is actually operating at this time and prepare us for future directions we hope to pursue.

First, the GCRI leadership team now consists only of Tony Barrett (Director of Research), Robert de Neufville (Director of Communications), and myself (Executive Director). Grant Wilson (Deputy Director) has been removed. Grant has made excellent contributions since the early days of GCRI but more recently has been less active.

Second, we have removed all of our Associates and Junior Associates. Our listing of Associates and Junior Associates had fallen out of date: it included a number of people who have not been active in GCRI, as well as some people whose work no longer matches the directions GCRI is pursuing.

Third, we have established a Senior Advisory Council, consisting of distinguished people in the global catastrophic risk field who are providing GCRI with invaluable input. Please note that the Senior Advisory Council is not equivalent to a Board of Directors. Legally, GCRI’s Board of Directors is that of our fiscal sponsor (parent organization), Social & Environmental Entrepreneurs. Our Senior Advisory Council exists solely to provide input to GCRI and holds no legal responsibility.

We are delighted to announce three Senior Advisors: Gary Ackerman, John Garrick, and Seán Ó hÉigeartaigh. Each of them is a distinguished individual in their respective fields, an accomplished researcher in the study of global catastrophic risk and related topics, and an experienced leader in the development and management of risk-oriented organizations. Their full bios are below. We are grateful for their ongoing input to GCRI.

Fourth, we anticipate new collaborations in our future work. The details are still being worked out and will likely depend in part on the success of our end-of-year fundraising. We will announce these collaborations when they are ready.

We are excited about all of these changes to our affiliates program. We thank all of our previous affiliates for their contributions and wish them the best for their future work. We likewise look forward to the future of GCRI, which we believe can be very bright.

As promised above, here are the bios of our new Senior Advisors:

Gary Ackerman is Associate Professor in the College of Emergency Preparedness, Homeland Security and Cybersecurity at the University at Albany, State University of New York. He was previously a GCRI Associate and also Director of the Unconventional Weapons and Technology Division at the National Consortium for the Study of Terrorism and Responses to Terrorism (START), based at the University of Maryland. He advises GCRI in particular regarding the development of new research organizations as well as the national and international security dimensions of global catastrophic risk, including with the United States government and the Washington, DC policy community.

John Garrick is Distinguished Adjunct Professor of Engineering at the University of California, Los Angeles, founder of the B. John Garrick Institute for the Risk Sciences at UCLA, and a member of the National Academy of Engineering. He was previously co-founder, President, Chairman, and Chief Executive Officer of the company PLG, which was a leader in the application of risk analysis, especially in the nuclear power sector. Dr. Garrick himself is one of the pioneers of risk analysis. He advises GCRI in particular on the application of risk analysis to global catastrophic risk, the entrepreneurship needed to build up an independent organization, and how to get risk analysis and other research put to use for actual risk-reduction decision-making.

Seán Ó hÉigeartaigh is Executive Director of the Centre for the Study of Existential Risk at the University of Cambridge. He was previously based at the Future of Humanity Institute at the University of Oxford. He also helped establish the Leverhulme Centre for the Future of Intelligence, a joint project of Cambridge, Oxford, and the University of California, Berkeley. He advises GCRI on a range of matters related to the study of global catastrophic risk, the development and management of research institutes, and the wider community of individuals and organizations involved in global catastrophic risk.
