2023 GCRI Fellowship Program

28 December 2023

GCRI is pleased to announce the 2023 Fellowship Program. The Fellowship Program aims to highlight exceptional collaborators with whom GCRI has had the opportunity to partner over the course of the year.

This year, we have three 2023 Fellows. One of them is collaborating with GCRI on an innovative research project on psychological and behavioral dimensions of AI governance. The other two are collaborating from their positions in new global catastrophic risk organizations in Africa and the Spanish-speaking world. These organizations are part of a broader initiative to advance a more geographically diverse field of global catastrophic risk, an initiative that GCRI is proud to contribute to.

Congratulations to our 2023 GCRI Fellows.

Natalie Kiilu
Nairobi

Natalie Kiilu is a Program Associate of the ILINA Program, an organization that supports people across Africa working on global catastrophic risk and related topics. She is also a fellow at Impact Academy, where she studies biosecurity and pandemic preparedness. She previously earned an LLB from Strathmore University. She is collaborating with GCRI on advancing the field of global catastrophic risk in Africa and worldwide.

En Qi Teo
Toulouse

En Qi Teo is currently pursuing an MSc in Economics at the Toulouse School of Economics. She obtained a BS in Economics from the National University of Singapore. Her primary research interests are political economy and behavioral economics. En Qi is collaborating with GCRI on cognitive biases in human use of AI systems, with a focus on the halo effect in interactions with large language models.


Roberto Tinoco
Bogotá

Roberto Tinoco is a Scientific Diplomat with Riesgos Catastróficos Globales, an organization that works on global catastrophic risk policy across the Spanish-speaking world. He previously worked for the government of Colombia in the Ministry of Foreign Affairs and the Dirección Nacional de Inteligencia (National Intelligence Directorate). Roberto is collaborating with GCRI on advancing the field of global catastrophic risk in Spanish-speaking countries and worldwide.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems, until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
