GCRI 2019 Plans: Organization Development

6 December 2018

GCRI may now be at a turning point. We have spent years advancing our understanding of global catastrophic risk, assessing the best role for our organization, and developing the capacity to grow the organization and its impact on the risks. We have established ourselves as leaders of global catastrophic risk scholarship and of the wider community that works on global catastrophic risk. GCRI has an important and distinctive role to play on global catastrophic risk. We believe the time has come for GCRI to scale up.

GCRI’s primary organization development challenge is to “jump the gap” from a small to a large organization. GCRI is currently caught on the wrong side of a chicken-and-egg problem: organizations need to be large in order to attract the large amounts of funding needed to be large. This is a common problem in the nonprofit sector and has been documented among organizations working on global catastrophic risk. GCRI is full of untapped potential that can only be realized if we jump the gap and become a large organization.

The remainder of this blog post describes why GCRI should scale up, how we would scale up, and our plans for specific projects. It is based on extensive conversations with people at other global catastrophic risk organizations as well as on our own analysis of what GCRI’s role should be. This blog post focuses primarily on administrative matters. A separate blog post describes our 2019 plans for the various global catastrophic risk topics that we work on: aftermath of global catastrophe, artificial intelligence, cross-risk evaluation & prioritization, nanotechnology, nuclear war, risk & decision analysis, and solutions & strategy.

Why GCRI Should Scale Up

GCRI should scale up for several reasons: it plays an important and distinctive role on global catastrophic risk, it could play that role better if it operated at a larger scale, it is an efficient and cost-effective organization, and there is a need for more large global catastrophic risk organizations and for our particular capabilities.

GCRI’s essential role is to advance an intelligent and practical response to global catastrophic risk. GCRI sits at the intersection of deep academic scholarship and real-world decision-making. We lead a research program centered on identifying and evaluating opportunities to reduce the risks and an outreach program to turn ideas into action. We also help people at all career points advance their understanding of global catastrophic risk and get more involved. We do all this in close coordination with the wider community of individuals and organizations working on global catastrophic risk.

Since our founding, GCRI has played a central role in the global catastrophic risk community, and we continue to do so today. Speaking from this insider vantage point, I can say that while some other organizations do some aspects of what GCRI does, no one pulls it all together like GCRI. I can also say that there is a lot of important work that is not getting done right now but that GCRI could do if it were able to scale up. Without in any way diminishing the work being done by other organizations, there is clearly an important role for GCRI.

GCRI’s role derives in part from our unique expertise on global catastrophic risk. We are top scholars in the field and recognized as such by leading academic groups. Three aspects of our expertise stand out. First, we work across the full range of global catastrophic risks, which enables us to assess a wide range of risk-reduction opportunities and draw on a wide body of knowledge and experience. Second, we are pioneers in the application of risk and decision analysis to global catastrophic risk, which is essential for evaluating priorities and tradeoffs for many important decisions. Third, we have deep and practical social science expertise, heavily informed by our outreach to decision-makers, which enables us to develop solutions that can be implemented in the real world. Our expertise has been developed over years of study and work and cannot readily be replicated elsewhere.

GCRI’s role also derives from our unique institutional structure. In particular, we are an independent, nonpartisan think tank based in the United States with no central office. As a think tank, we specialize in translating between the world of scholarship and the world of decision-making. As an independent organization, we are agile and flexible, able to adjust our focus as opportunities arise. Our home in the US provides us with a wealth of important outreach opportunities, including to the US government and to private industry. Our nonpartisan profile enables us to do outreach across the political spectrum. Finally, because we have no central office, we are skilled at working with people across the US and around the world. We designed GCRI this way in order to maximize our impact on the risks. This institutional design has already proven successful and, as discussed below, it will be even more valuable as GCRI scales up.

A recent example may help to illustrate GCRI’s role. In 2018, we spent much of our time on artificial intelligence outreach to the US national security policy community. (Some details are discussed here.) We had not specifically planned for this work, but we had the flexibility to focus on it when important opportunities arose. The opportunities arose because of our ongoing outreach to that community and because of our established expertise on both AI and national security (especially nuclear weapons). Our technical expertise and our solutions & strategy research also meant that we knew what to say in these conversations. Throughout the process, it helped a lot to have Tony Barrett in Washington, DC, and me in New York, able to travel to DC as needed. Our work also benefited from our close relations with other individuals and organizations active on AI and global catastrophic risk; we facilitated their participation and accomplished more as a result. These important outreach activities would not have happened without GCRI. They are among the sorts of activities we could do more of if we are able to scale up.

GCRI is able to accomplish all this with remarkable efficiency and cost-effectiveness. We have no central office and no significant office expenses. Our parent organization, Social & Environmental Entrepreneurs (SEE), provides comprehensive back-end administrative support for a very low 6.5% overhead rate. GCRI is thus able to dedicate an unusually large portion of our time and income to project work. Furthermore, despite SEE’s low overhead rate, they process grants, contracts, hiring, and related matters with little hassle and fast turnaround. That makes GCRI an attractive option not just for our core work but also for discretionary hiring (people who need to be hired somewhere but it doesn’t matter where) and for the lead/prime contractor role on multi-institution grants. This is all the more reason for GCRI to scale up.

Finally, there is a need for more large global catastrophic risk organizations, especially organizations like GCRI. As noted above, large organizations are more attractive to funders, especially large funders. This is because large organizations can handle larger projects and can do much of the talent evaluation and project supervision that funders would otherwise have to do themselves. Furthermore, the wider global catastrophic risk community currently finds itself limited less by funding and more by the availability of senior talent and high-value projects. Those are two problems that a scaled-up GCRI could readily help with. This makes right now an especially good time for GCRI to scale up.

How GCRI Would Scale Up

GCRI should scale up, and we can. We know who our initial hires will be. We have mapped out the next steps we need to take to succeed. We even have networks in place to provide us with the critical feedback needed to refine our activities as we scale up. In short, GCRI is a mature organization that is well prepared to scale up.

Once funding is in place, our first step will be to bring Tony Barrett and Robert de Neufville on full-time. Along with myself, they make up GCRI’s leadership team and are essential building blocks for the organization. Tony Barrett is an expert in risk and decision analysis, international security, and outreach to the US government. Robert de Neufville is an expert in social science, the use of expert judgment, and real-world politics. They both have expertise across several global catastrophic risks, intimate knowledge of the GCRI organization, and the capacity to supervise other people. They have been invaluable to GCRI in the limited hours GCRI has been able to offer them. With dedicated funding, they could do much more.

We may have the opportunity to make at least one more senior hire at this time. We are not at liberty to discuss details of this publicly, but we can discuss them privately in select conversations. Please contact me directly to inquire further.

With a base of Tony Barrett, Robert de Neufville, myself, and possibly other(s), GCRI could scale up further. We would tap our wide networks to identify further collaborators. We expect a robust response due to our excellent reputation, our interdisciplinary agenda, and our ability to work with people anywhere in the world. Our geographic flexibility is especially important for attracting senior contributors, who are often settled in their respective locations and less willing or able to relocate. Indeed, we are often approached by senior (and junior) colleagues who are interested in getting more involved.

We anticipate that GCRI will scale up via a mix of full-time and part-time hires, temporary contract work, and subcontracts to other organizations. Indeed, we already have such a mix at a limited scale. We find that this mix enables us to get the most out of our funding, and we expect that to remain the case as we scale up.

As we scale up, we will continue to solicit feedback and adjust accordingly, just as we did in preparing these plans and as we have done throughout our existence. We are senior leaders of the global catastrophic risk field, but we nonetheless recognize that we always have room to improve.

Project Work

Fundraising and scaling up will be a primary focus for GCRI in 2019, but it will not be our only focus. This section describes some other project work we’re planning. The project work described here cuts across the global catastrophic risk topics we work on (AI, nuclear war, etc.). For our 2019 plans on the global catastrophic risk topics, please see here.

In 2019, we plan a major focus on outreach, especially to policy and industry communities. We are increasingly of the view that our impact on global catastrophic risk is limited less by research and more by outreach. This holds for the wider global catastrophic risk community and especially for GCRI given our role within the community. We also focused on outreach in 2018, with great success. We plan to build on these successes in 2019 and to scale them up if resources permit.

We plan to increasingly align our research agenda with our outreach. This includes outreach to our colleagues in global catastrophic risk who need to decide how to prioritize their efforts—a central theme of our work on risk and decision analysis and cross-risk evaluation and prioritization. This also includes outreach to policy and industry communities that do not necessarily share our interest in global catastrophic risk. Indeed, how to handle such circumstances is a central theme of our work on solutions and strategy. Specific research plans are discussed in our blog post on global catastrophic risk topics.

We also plan to focus on developing the global catastrophic risk talent pool. Because of our senior status within the field of global catastrophic risk, we are able to provide mentoring and guidance to people at all career points. This work will be synergistic with our efforts to scale up: developing the talent pool will help us connect with people who can work with us, and as we expand we can do more to develop the talent pool. The work will also be synergistic with our ongoing collaborations with the wider global catastrophic risk community, which stands to benefit from our support of the talent pool.

One new type of project we could take on when our funding is more secure is helping global catastrophic risk funders identify and evaluate additional funding opportunities. We believe there is a general need for more evaluation of talent and projects among global catastrophic risk funders. This task requires the kind of robust technical knowledge and professional networks GCRI has, and some funders have approached us for advice. While we have provided some advice, we have a conflict of interest as long as our own work is inadequately funded. When we secure funding, it will be easier for us to offer our assistance to funders. Such work would be synergistic with GCRI’s support for the global catastrophic risk talent pool and our plans for further growth.

Finally, one other project we have planned for 2019 is to redesign our website. The current site served us well when GCRI launched in 2011, but the internet has changed since then, in particular with the rise of smartphones and other small touch screens. This is an important project that we intend to pursue once things settle down after our current stretch of fundraising.

Throughout our project work, we plan to remain in dialogue with the wider global catastrophic risk community. This will ensure that we are not duplicating effort and that we are exploiting synergies between our work and theirs. It will also provide us with the critical feedback we need to continue refining our efforts.

Concluding Remarks

GCRI plays an important and distinctive role on global catastrophic risk. We are poised to play this role at a larger scale. Scaling up is our primary organization development focus for 2019. This is an excellent time for GCRI to start scaling up, given our capacity and the needs of the wider global catastrophic risk community. We hope to finally get past the chicken-and-egg funding problem that has held us back as an organization, and we look forward to discussing this with anyone who can help.
