Collaborative Publishing with GCRI

18 November 2021

Global catastrophic risk is a highly complex, interdisciplinary topic. It benefits from contributions from many people with a variety of backgrounds. For this reason, GCRI emphasizes collaborative publishing. We publish extensively with outside scholars at all career points, including early-career scholars who are relatively new to the field, as well as mid-career and senior scholars at other organizations who bring complementary expertise.

This post describes our approach to collaborative publishing and documents our collaborative publications. Researchers interested in publishing with GCRI should visit our get involved page. The primary way to initiate collaborations with GCRI is via our Advising & Collaboration Program; we also welcome unsolicited inquiries.

From our perspective, collaborative publishing serves several purposes.

First, collaborative publishing provides a broader base of expertise. The GCRI team consists of interdisciplinary researchers with broad backgrounds, but we still benefit from outside input. Global catastrophic risk is too broad a topic for any one group to have a background in every relevant area. Indeed, much important global catastrophic risk work is at the interface of disparate topics. Collaborating with people who have backgrounds in the topics we seek to study enables us to do more and better research.

Second, collaborative publishing strengthens professional networks. The field of global catastrophic risk benefits from its members having strong relationships. These relationships enable us to work together effectively, share resources, recommend each other for opportunities, and more. There are many ways to cultivate relationships across the field, and collaborative publishing is one of the most robust. It builds strong relationships among collaborators and gives them a deep understanding of each other’s expertise, abilities, and working styles. These relationships and understandings benefit the field as a whole.

Third, collaborative publishing supports professional development. Collaborative publishing gives early-career researchers an invaluable opportunity to learn from more experienced researchers. They can improve their understanding of the intellectual substance of global catastrophic risk, learn about the nuts and bolts of the publishing process, and gain general wisdom about the field as a whole. Collaborative publishing also gives mid-career and senior researchers a chance to sharpen their skills and ideas by developing work and reaching consensus with other researchers who may have different backgrounds and perspectives.

Collaboration does pose some challenges. It requires a greater degree of coordination, deliberation, and consensus building. It can also be difficult when researchers have different backgrounds. Different disciplines often have different sets of jargon, paradigms for understanding the world, and standards for what constitutes good research. As an interdisciplinary research group with an emphasis on collaboration, GCRI helps to overcome these challenges and facilitate collaboration.

Further perspective on collaborative publication can be found in the GCRI paper Artificial interdisciplinarity: Artificial intelligence for research on complex societal problems. This paper discusses the cognitive challenges of interdisciplinary research and the prospects for AI to help overcome these challenges. As discussed above, collaboration creates both opportunities and challenges for interdisciplinary research.

A list of our prior collaborative publications is below. For our full list of publications, please visit our publications page.

For the purpose of the lists below, we define “early-career” collaborators as those who had not yet completed a Ph.D. degree (or were at a similar career point) at the time of collaboration, and “mid-career and senior” collaborators as those who had completed a Ph.D. degree (or were at a similar career point) at the time of collaboration. The early-career/mid-career distinction can be drawn in other ways; these definitions simply help to illustrate GCRI’s collaborations with people at different career points.

Publications With Early-Career and Pre-Ph.D. Collaborators

The following publications are co-authored by GCRI team members and early-career and pre-Ph.D. collaborators from outside of GCRI. “Early-career and pre-Ph.D.” is defined as people who have not yet completed a Ph.D. degree or are at a similar career point. Early-career and pre-Ph.D. collaborators’ names are in boldface.

Owe, Andrea and Seth D. Baum, 2021. Moral consideration of nonhumans in the ethics of artificial intelligence. AI & Ethics, vol. 1, no. 4 (November), pages 517-528, DOI 10.1007/s43681-021-00065-0.

Note: Andrea Owe joined the GCRI team while work on this paper was in progress.

Baum, Seth D. and Jonas Schuett, 2021. The case for long-term corporate governance of AI. Effective Altruism Forum, 3 November.

Cihon, Peter, Moritz J. Kleinaltenkamp, Jonas Schuett, and Seth D. Baum, 2021. AI certification: Advancing ethical practice by reducing information asymmetries. IEEE Transactions on Technology and Society, vol. 2, issue 4 (December), pages 200-209, DOI 10.1109/TTS.2021.3077595.

Cihon, Peter, Jonas Schuett, and Seth D. Baum, 2021. Corporate governance of artificial intelligence in the public interest. Information, vol. 12, article 275, DOI 10.3390/info12070275.

Umbrello, Steven and Seth D. Baum, 2018. Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing. Futures, vol. 100 (June), pages 63-73, DOI 10.1016/j.futures.2018.04.007.

White, Trevor N. and Seth D. Baum, 2017. Liability law for present and future robotics technology. In Patrick Lin, Keith Abney, and Ryan Jenkins (editors), Robot Ethics 2.0, Oxford: Oxford University Press, pages 66-79. 

Baum, Seth and Trevor White, 2015. When robots kill. The Guardian Political Science blog, 23 June.

Barrett, Anthony M., Seth D. Baum, and Kelly R. Hostetler, 2013. Analyzing and reducing the risks of inadvertent nuclear war between the United States and Russia. Science and Global Security, vol. 21, no. 2, pages 106-133, DOI 10.1080/08929882.2013.798984.

Maher, Timothy M. Jr. and Seth D. Baum, 2013. Adaptation to and recovery from global catastrophe. Sustainability, vol. 5, no. 4 (April), pages 1461-1479, DOI 10.3390/su5041461.

Honorable mentions:
Owe, Andrea and Seth D. Baum, forthcoming. The ethics of sustainability for artificial intelligence. Proceedings of AI for People: Towards Sustainable AI, CAIP’21.

Baum, Seth D. and Andrea Owe, forthcoming. Artificial intelligence needs environmental ethics. Ethics, Policy, & Environment.

Note: Andrea Owe was on the GCRI team for the duration of work on these papers. These papers are therefore not external collaborations, but they are nonetheless examples of GCRI publishing with early-career researchers.

Publications With Mid-Career, Post-Ph.D., and Senior Collaborators

The following publications are co-authored by GCRI team members and mid-career, post-Ph.D., and senior collaborators from outside of GCRI. “Mid-career, post-Ph.D., and senior” is defined as people who have completed a Ph.D. degree or are at a similar career point. Mid-career, post-Ph.D., and senior collaborators’ names are in boldface.

Fitzgerald, M., Aaron Boddy, and Seth D. Baum, 2020. 2020 survey of artificial general intelligence projects for ethics, risk, and policy. Global Catastrophic Risk Institute Technical Report 20-1. 

Baum, Seth D., Anthony M. Barrett, and Roman V. Yampolskiy, 2017. Modeling and interpreting expert disagreement about artificial superintelligence. Informatica, vol. 41, no. 4 (December), pages 419-427.

Baum, Seth D., David C. Denkenberger, and Joshua M. Pearce, 2016. Alternative foods as a solution to global food supply catastrophes. Solutions, vol. 7, no. 4, pages 31-35.

Baum, Seth D. and Bruce E. Tonn (editors), 2015. Confronting future catastrophic threats to humanity [special issue]. Futures, vol. 72 (September), pages 1-96. 

Baum, Seth D. and Bruce E. Tonn, 2015. Introduction: Confronting future catastrophic threats to humanity. Futures, vol. 72 (September), pages 1-3, DOI 10.1016/j.futures.2015.08.004.

Baum, Seth D., David C. Denkenberger, and Jacob Haqq-Misra, 2015. Isolated refuges for surviving global catastrophes. Futures, vol. 72 (September), pages 45-56, DOI 10.1016/j.futures.2015.03.009.

Baum, Seth D., David C. Denkenberger, Joshua M. Pearce, Alan Robock, and Richelle Winkler, 2015. Resilience to global food supply catastrophes. Environment, Systems, and Decisions, vol. 35, no. 2 (June), pages 301-313, DOI 10.1007/s10669-015-9549-2.

Baum, Seth D. and Itsuki C. Handoh, 2014. Integrating the planetary boundaries and global catastrophic risk paradigms. Ecological Economics, vol. 107 (November), pages 13-21, DOI 10.1016/j.ecolecon.2014.07.024.

Haqq-Misra, Jacob, Michael W. Busch, Sanjoy M. Som, and Seth D. Baum, 2013. The benefits and harm of transmitting into space. Space Policy, vol. 29, no. 1 (February), pages 40-48, DOI 10.1016/j.spacepol.2012.11.006.

Publications With Both Early-Career/Pre-Ph.D. and Mid-Career/Post-Ph.D. Collaborators

The following publications are co-authored by GCRI team members and at least one early-career or pre-Ph.D. collaborator and one mid-career or post-Ph.D. collaborator from outside of GCRI. GCRI team members are in boldface.

Galaz, Victor, Miguel A. Centeno, Peter W. Callahan, Amar Causevic, Thayer Patterson, Irina Brass, Seth Baum, Darryl Farber, Joern Fischer, David Garcia, Timon McPhearson, Daniel Jimenez, Brian King, Paul Larcey, and Karen Levy, 2021. Artificial intelligence, systemic risks, and sustainability. Technology in Society, vol. 67 (November), article 101741, DOI 10.1016/j.techsoc.2021.101741.

Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019. Long-term trajectories of human civilization. Foresight, vol. 21, no. 1, pages 53-83, DOI 10.1108/FS-04-2018-0037.

Baum, Seth D., Timothy M. Maher, Jr., and Jacob Haqq-Misra, 2013. Double catastrophe: Intermittent stratospheric geoengineering induced by societal collapse. Environment, Systems and Decisions, vol. 33, no. 1 (March), pages 168-180, DOI 10.1007/s10669-012-9429-y.

