2020 Annual Report

24 November 2020

2020 has been a challenging year for GCRI, as it has been for so many other organizations. The pandemic we are currently living through is, by some definitions, a global catastrophe. COVID-19 has already killed more than a million people worldwide, and has disrupted the work and lives of many others. At the same time, political turmoil in the US and around the world has demanded our attention and created both new risks and new opportunities.

Fortunately, GCRI is relatively well-positioned to operate under these conditions. GCRI was built from the start for remote collaboration, so the need for social distancing has had a relatively limited effect on our operations. Because we do not operate out of a central office, we have no expenses for office space that we cannot use. Additionally, the pandemic has, if anything, increased overall interest in and awareness of global catastrophic risks.

Nonetheless, the events of 2020 have taken a toll on all of us. Our hearts go out to all those who have been affected more severely.

Looking ahead, we hope that the COVID-19 pandemic can galvanize action to address all types of global catastrophic risk. The pandemic creates a rare window of opportunity in which human society is acutely aware of the importance of global catastrophic risk. As a leader in the field of global catastrophic risk, GCRI is positioned to translate that interest into action. This is one priority for our 2021 work, especially if we can raise the funds needed to pursue it.

This post summarizes what GCRI accomplished in 2020, what we plan to do in 2021, and the funding we are seeking to execute these plans. GCRI posted similar year-end summaries in 2018 and 2019.

2020 Accomplishments

GCRI made substantial progress this year in each of our primary focus areas: research, outreach, community support, and organization development. GCRI also published formal statements on the COVID-19 pandemic and on racism.

Research

GCRI has seven publications so far in 2020:

Accounting for violent conflict risk in planetary defense decisions, in Acta Astronautica. This paper analyzes three points of intersection between violent conflict risk and programs that defend Earth from asteroids, comets, and meteors. First, planetary defense can serve as a model for successful global risk management. Second, nuclear explosives can be used to deflect or disrupt incoming asteroids, comets, and meteors. Third, when asteroids, comets, and meteors collide with Earth, they cause explosions that can be misinterpreted as violent attacks. The paper is an example of GCRI’s work on cross-risk evaluation & prioritization.

Artificial interdisciplinarity: Artificial intelligence for research on complex societal problems, in Philosophy & Technology. This paper surveys opportunities for AI to support interdisciplinary research on major societal challenges like global catastrophic risk. The sheer cognitive challenge of interdisciplinary research is a major impediment to progress on these problems. AI already provides some support, such as via the Existential Risk Research Assessment project, which uses a custom artificial neural network to recommend literature on catastrophic risk (see the illustrative sketch after this publication list). More advanced AI systems could provide better support, though the most advanced systems could also pose significant risks. The paper is an example of GCRI’s work on artificial intelligence.

Medium-term artificial intelligence and society, in Information. This paper develops the idea of medium-term AI, which has been relatively neglected in comparison with near-term and long-term AI. The paper analyzes the importance of medium-term AI as a point of common ground between factions focused on near-term and long-term AI. The paper is an example of GCRI’s work on artificial intelligence.

Quantifying the probability of existential catastrophe: A reply to Beard et al., in Futures. This paper discusses the challenge of quantifying the risk of global and existential catastrophe. It is written in response to a recent article by Simon Beard, Thomas Rowe, and James Fox. The GCRI paper finds that higher-quality quantification requires more research effort. The paper further discusses the circumstances in which quantification is worth the effort. The paper is an example of GCRI’s work on risk & decision analysis.

Deep learning and the sociology of human-level artificial intelligence, in Metascience. This paper reviews the book Artifictional Intelligence: Against Humanity’s Surrender to Computers by sociologist Harry Collins. The book brings a valuable social science perspective to the study of AI, especially regarding the capacity of AI for processing human language. Collins argues that current data-driven AI paradigms are inadequate and that human-level language ability requires new paradigms. The paper is an example of GCRI’s work on artificial intelligence.

The unthinkable is possible, in California Magazine. This essay argues that infectious disease experts have long known there was a substantial risk of a pandemic from a novel coronavirus originating in animals, but that society was slow to act when the pandemic began because decision-makers lack sufficient incentive to prepare for and respond quickly to emerging catastrophes. It argues that the COVID-19 pandemic shows that society needs to commit resources to preventing catastrophes while they still seem far away, because by the time the need for action is clear, it may be too late. This essay is an example of GCRI’s work on solutions & strategy.

The Defense Production Act and the failure to prepare for catastrophic incidents, in War on the Rocks. This essay examines the Defense Production Act and the US government’s systemic failure to prepare for catastrophes like the COVID-19 pandemic. It argues that the executive branch’s ad hoc application of the Act’s authorities to the pandemic exemplifies how both Republican and Democratic administrations have failed to develop or adapt the Act’s tools for 21st-century threats. This essay is an example of GCRI’s work on solutions & strategy.
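To make the literature-recommendation idea from the Artificial interdisciplinarity paper concrete, here is a minimal sketch of how a neural relevance recommender of that general kind might work. This is not the Existential Risk Research Assessment codebase: the sample abstracts, labels, model configuration, and scoring step below are all illustrative assumptions.

    # Minimal sketch of a neural text-relevance recommender, loosely in the
    # spirit of a system like the Existential Risk Research Assessment.
    # All data and model choices here are illustrative assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: abstracts labeled 1 if relevant to
    # global catastrophic risk, 0 otherwise.
    abstracts = [
        "Nuclear winter scenarios and global food security",
        "A survey of convolutional networks for image recognition",
        "Pandemic preparedness and biosecurity governance",
        "Benchmarking database query optimizers",
    ]
    labels = [1, 0, 1, 0]

    # TF-IDF features feeding a small feedforward neural network.
    model = make_pipeline(
        TfidfVectorizer(stop_words="english"),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    )
    model.fit(abstracts, labels)

    # Rank new abstracts by predicted relevance; the top-scoring items
    # would be recommended to human reviewers.
    candidates = [
        "Asteroid deflection strategies and planetary defense",
        "Optimizing cache locality in sorting algorithms",
    ]
    scores = model.predict_proba(candidates)[:, 1]
    for score, text in sorted(zip(scores, candidates), reverse=True):
        print(f"{score:.2f}  {text}")

In practice, a system of this kind would be trained on many labeled records and periodically retrained as reviewers supply new relevance judgments; the sketch only conveys the overall shape of the pipeline.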

Outreach

Although the pandemic has made in-person outreach difficult, we have nevertheless continued to do a variety of outreach activities in 2020. We gave remote presentations to the AGI-20 Virtual Conference, the Cambridge University Centre for the Study of Existential Risk, and the Duke Center on Risk. We also engaged in several policy outreach activities, including to the IEEE on the development of a new technical standard on AI governance and to the US National Security Commission on Artificial Intelligence.

Community Support

In 2020, GCRI has continued to support the broader global catastrophic risk community. In 2019, we initiated a formal advising and collaboration program. In 2020, we ran a new round of the program. We spoke with 50 people from around the world, ranging from undergraduates to senior scholars and professionals. Several of these people went on to contribute to our other project work.

Organization Development

GCRI’s two main organizational development priorities for 2020 were to increase our administrative capacity and expand our network of active project collaborators. We have been successful on both fronts. For our administrative capacity, we have hired McKenna Fitzgerald to the position of Project Manager and Research Assistant. Ms. Fitzgerald has done excellent work across a wide range of GCRI activities. For our network of active project collaborators, we have recruited select people from our advising and collaboration program and initiated projects with other people from our networks. In doing so, we have positioned GCRI to scale up further as funds permit.

2021 Plans

We expect the pandemic to remain disruptive for much of 2021. It will probably continue to affect productivity by disrupting day-to-day workflows. It will probably continue to disrupt a wide range of professional activities, such as meetings and conferences. It could also continue to disrupt the economy, with potential implications for GCRI’s financial planning. At the same time, it could continue to heighten interest in catastrophic risk, creating opportunities for GCRI and our colleagues.

Our expectations for the pandemic have several implications for our 2021 plans. First, we plan to promote broader efforts to reduce global catastrophic risk, leveraging the concern generated by the pandemic. Second, we plan to adopt a relatively conservative growth strategy to hedge against possible declines in funding and workflow capacity. Third, as we grow, we plan to look for opportunities to help people who have been affected by the pandemic, such as by funding researchers who have had a harder time finding work since the pandemic began.

With these considerations in mind, here are some of our specific plans for 2021. The details of our plans depend significantly on the funding we receive. Nonetheless, they provide a general outline of the work we intend to do in 2021.

First, we plan to continue our policy outreach and related research. We were encouraged by the success of our policy outreach program in 2020 and anticipate compelling opportunities to influence policy in 2021. We plan to continue aligning some of our research with our policy outreach. We also plan to leverage the window of opportunity generated by the COVID-19 pandemic, translating the heightened attention into broader efforts to address global catastrophic risk.

Second, we plan to continue our community building efforts via at least one new round of our advising and collaboration program. We will continue to align the program with our research and outreach projects. Additionally, we are currently exploring opportunities to use the program to improve the demographic diversity of the field of global catastrophic risk. (See also the GCRI Statement on Racism.)

Third, we plan to continue to work on a portfolio of research projects that advance a fundamental understanding of global catastrophic risk and practical knowledge of how to effectively address the risk. In particular, we anticipate building on our recent work by continuing to do substantial work on artificial intelligence, especially questions of ethics and governance. Additionally, we anticipate continuing work on risk and decision analysis and cross-risk evaluation and prioritization, especially related to questions of risk quantification and the evaluation of decisions under extreme uncertainty.

Fourth, we plan to grow the organization by further expanding our network of external collaborators and by making select full-time hires. In 2020, we had great success funding collaborators on a part-time and temporary basis, which allowed us to work with talented people, some of whom were not available for full-time work. In 2021, we plan to scale up these collaborations. Additionally, as funding and talent permit, we plan to hire one or more of our most talented collaborators full time.

Fundraising

GCRI currently operates on an annual budget of approximately $300,000. We have enough reserves to continue to operate through the beginning of 2022.

We are currently seeking to raise funds to expand our current operations and maintain them further into the future. We gratefully welcome any support. Prospective contributors can visit our donate page or contact me directly.

As noted above, increased funding over the past year enabled us to hire McKenna Fitzgerald to the new position of Project Manager and Research Assistant. Having someone with her skills on the team positions GCRI to continue expanding. Additionally, our advising and collaboration program and our extensive professional networks put us in an excellent position to expand further if we can raise the money to do so.

Conclusion

Despite the unusual challenges posed by the events of 2020, GCRI has had a reasonably productive year, and we believe we are well-positioned for another good one in 2021. The pandemic is likely to continue posing challenges and creating opportunities, and we are prepared to tackle both as they arise. We are excited about what we can accomplish in 2021.

Note: This page was originally published as “Summary of 2020-2021 GCRI Accomplishments, Plans, and Fundraising”.
