My Experience with the GCRI Advising/Collaboration Program: A Junior Collaborator from Singapore

20 November 2019

The Centre for AI and Data Governance is fairly new. It focuses mainly on scholarship of Singapore law related to digital technology, such as fintech regulation and liability for autonomous vehicles. It’s a great place to learn about academia, government-facing work, and the overall AI ecosystem in Singapore. My colleagues do not focus on global catastrophic risk, but I have some freedom to pick my own projects as long as I can obtain external supervision.

I initially reached out to GCRI for general career advice. We soon realised that we were working on a similar topic: introducing a “commons” perspective to the analysis of AI development. There is a significant body of knowledge about managing commons, especially natural resources, associated with people like Garrett Hardin (“tragedy of the commons”) and Elinor Ostrom (who won the economics Nobel Prize in 2009). We wanted to apply these lessons to the governance of AI. I was sitting on the kernel of an idea, while GCRI had an almost-fully-written paper. Instead of scaring me off or staking their claim to the topic, they invited me to collaborate.

So far, we’ve submitted a conference paper (still under review), and we are working on expanding it into a longer, journal-ready version. In the process, I’ve learnt a lot of specific things that would otherwise have taken me much longer to figure out: lines of inquiry to pursue, how to manage references, formatting for conference submissions, and so on. Working with GCRI has also been generally motivating and encouraging. Research is often solitary, but it shouldn’t be isolating. Being able to chat about all manner of things with more experienced people who care about the long term keeps me going in the day-to-day.

I know that I’m extremely lucky to be in a position where I get to take an interest in and work on these big-picture issues. If it weren’t for GCRI, I’d be much less likely to be doing so. I’ve tried to pay their guidance forward by advising interested people and organising discussion groups in Singapore. It’s still early days, but hopefully GCRI can have an outsized impact on the global catastrophic risk community in the region.

Image credit: Jia Yuan Loke
