In Memory of John Garrick

22 February 2021

As histories of risk analysis document (e.g. this and this), the field originated in large part in the early nuclear power industry. Garrick was at the center of this work through his risk consulting firm PLG and through his roles at the U.S. Atomic Energy Commission, the Nuclear Waste Technical Review Board, and the Society for Risk Analysis. His early work on the concept of risk remains highly relevant today and is essential reading for anyone new to the field.

Garrick’s contributions to global catastrophic risk are numerous. In 2009, he published the book Quantifying and Controlling Catastrophic Risks, which outlines a framework for applying quantitative risk analysis techniques to catastrophic risks; this framework inspired GCRI’s own work on risk and decision analysis. In 2014, Garrick helped found the Garrick Institute for the Risk Sciences at UCLA. The Institute has hosted annual conferences on global catastrophic risk and published the conference proceedings (see this and this). GCRI participated in these conferences and contributed two papers (this and this).

For further information on Garrick’s illustrious career, please see this from the Garrick Institute.

We at GCRI send our heartfelt condolences to the Garrick family, and we look forward to maintaining a strong relationship with our colleagues at the Garrick Institute.

Image credit: The B. John Garrick Institute for the Risk Sciences


Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
