New Organization Directory Resource

by | 4 December 2012

GCRI is pleased to announce the publication of its newest resource for the global catastrophic risk community: a directory of GCR organizations. The directory features 117 organizations, each accompanied by a brief annotation describing its work.

The organizations listed in the directory work on many different aspects of GCR, covering many specific risks and many approaches to addressing them. They also span many organizational types, including think tanks, university research groups, government agencies, and private foundations, and many different countries, though most are based in the United States. Indeed, despite the large number of organizations we have identified, we expect that we are missing many, especially organizations based outside the U.S. We encourage you to let us know of any organizations we have overlooked, either in the comment thread below or via our contact form.

Several people contributed to this directory: Tony Barrett, Tim Maher, Grant Wilson, and myself from GCRI, plus Nick Beckstead, Gordon Irlam, Jonatas Müller, and Lennart Stern. Tim Maher deserves particular credit for merging several previous lists, removing some unrelated organizations, and writing the careful annotations that accompany each entry.

