GCRI 2018-2019 Financial Position

11 December 2018

GCRI has long been doing excellent work at a small scale. Some recent work is summarized in our blog post on 2018 accomplishments. However, we have been very limited by a lack of robust funding. We could do much more if we had the funding needed to scale up. Therefore, GCRI is now trying to raise a larger amount of funding than in the past. Specifically, we are currently seeking a total of $1.5 million. Anyone interested in contributing to this can do so via our donate page or by contacting me directly.

UPDATE: GCRI has received a $250,000 donation, putting us partway to our $1.5 million goal.

The case for why GCRI should scale up is made in detail here, and our planned global catastrophic risk work is described in further detail here. In short, GCRI has an important and distinctive role to play, rooted in our deep expertise across the range of risks, our focus on translating between advanced scholarship and real-world decision-making, our wide networks across the communities that work on global catastrophic risk and related topics, and our highly flexible organizational structure. We also have the capacity to start scaling up immediately, beginning by bringing Tony Barrett and Robert de Neufville on full-time. Along with myself, they make up GCRI’s leadership team and are essential building blocks for the organization. They are also senior experts on global catastrophic risk. Once their positions in GCRI are secure, we plan to expand further by tapping into our wide networks.

Funding of $1.5 million would fully cover Tony Barrett, Robert de Neufville, and myself over three years, plus a modest discretionary budget for additional hires, contract work, and/or any other expenses. It would be a solid start for GCRI scaling up. Having GCRI’s full leadership team available full-time for three years would ensure that we could work effectively to build up the organization.

As we start to scale up, we expect that we will identify excellent opportunities that we would need substantially more than $1.5 million to pursue. Therefore, it would not be unreasonable for GCRI to be funded at more than this amount. However, we would also be happy to raise new funds as those opportunities arise. We are eager to work with funders who want to partner with us on our scaling up process and who have the flexibility to make additional funds available as warranted by our work and our subsequent opportunities.

A three-year, $1.5 million budget would be substantially larger than previous and current GCRI budgets. For the last several years, including 2018, GCRI has operated on a budget in the range of $100,000 to $150,000. We had to deplete our funds each year and had no financial stability. This amount of funding has covered myself working full-time plus part-time work from other people, mainly Tony Barrett and Robert de Neufville. I have frequently taken pay at a substantially sub-market rate in order to keep the organization running. Our small budget is worth bearing in mind when evaluating our accomplishments. Indeed, one analysis from a year ago found GCRI’s budget to be “shockingly low considering their productivity”.

Our low budget has greatly restricted our productivity. Tony Barrett and Robert de Neufville both spend most of their time working at other jobs, not focused on global catastrophic risk, in order to earn enough income. I have taken occasional outside jobs as well. Additionally, we have cut costs at the expense of productivity in various other ways, such as doing tasks ourselves that would be better to outsource. While we intend to continue keeping our costs low so as to maximize our output, there are some things that would be well worth spending more money on. Finally, our low budget creates substantial insecurity and forces us to spend a relatively large portion of our time fundraising, both of which greatly reduce our contributions to addressing global catastrophic risk.

There are several reasons why GCRI has not previously raised more funding. One is a chicken-and-egg problem in which organizations need to be large in order to attract the large amounts of funding needed to be large. This is a common challenge for many nonprofit organizations and is documented here for global catastrophic risk organizations. It creates a bias toward funding large organizations, which are not always the ones that need it most. This has been a rather substantial challenge for us and is one of the main reasons we are focusing heavily on scaling up, as discussed in detail here. I would encourage potential funders who are reluctant to give GCRI more than we have previously received to be mindful of the chicken-and-egg problem and to consider GCRI’s capacity to use more funding productively.

A second reason is that there are relatively few sources of funding for global catastrophic risk work. For example, both the National Science Foundation and the more mission-oriented government funding programs generally favor funding work on risks that they believe are higher probability and less speculative. We have received some government funding from the Department of Homeland Security, but this was from a relatively limited grant program that is no longer active. We generally refrain from pursuing funding that would be too far removed from our focus on global catastrophic risk, even if this limits our funding opportunities.

A third reason is that while there is some dedicated global catastrophic risk funding, it is almost always designated for work on specific risks, whereas GCRI specializes in working across risks. We have gotten more funding in recent years by emphasizing work on specific risks, especially artificial intelligence and nuclear weapons. This is important work, but it fails to address the many important cross-risk issues. We hope that more funding can be made available for work that cuts across multiple global catastrophic risks.

Finally, I have not always been the most capable fundraiser. While it has sometimes been a challenging fundraising environment, I am working to improve our fundraising performance, with input from our advisors and colleagues in partner organizations.

Ultimately, what’s important is not how large or well-funded GCRI is, but how successfully global catastrophic risk is being addressed. GCRI has been designed from the start to optimize our impact on the risks and to complement the work being done by other organizations. We are already doing excellent work at a small scale, but we could be doing much more with the resources to scale up. We hope to raise those resources and thereby advance the goal of reducing global catastrophic risk.

To support GCRI, please visit our donate page or contact me directly.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.