October Newsletter: AI Governance Call for Papers

by Seth Baum | 15 October 2020

Dear friends,

I am editing a new special issue on “Governance of Artificial Intelligence” for the journal Information. The call for papers is linked below, along with further details. Please feel free to circulate this among others who may be interested.

Information is an open access journal with an article processing charge. GCRI is able to cover the article processing charge for a limited number of submissions. Interested authors should contact me directly about this.

Sincerely,
Seth Baum,
Executive Director

Call For Papers: Governance of Artificial Intelligence

https://www.mdpi.com/journal/information/special_issues/Governance_AI

Artificial intelligence (AI) technology is playing an increasingly important role in human affairs and in the world at large. The societal significance of AI technology demands effective governance via public policy and private activity alike. Successful AI governance must be dynamic and anticipatory so as to account for recent and future changes in both AI technology and the social systems that use it. Successful AI governance must also account for the uncertainties about both the technology and the social systems, including the possibility of rare extreme events. In addition, successful AI governance must have a sound ethical basis to ensure AI is developed and deployed in a way that is good or right. To succeed, AI governance needs to be grounded in rigorous, practical research that can inform real-world governance initiatives.

Therefore, the purpose of this special issue is to present the latest developments in AI governance. Investigators in the field are invited to contribute their original, unpublished works. Both research and review papers are welcome.

Topics of interest include but are not limited to:

  • Governance issues related to near-term, medium-term, and/or long-term AI  
  • Governance of AI by governments, corporations, universities, and other relevant institutions  
  • Evaluations of opportunities to improve AI governance  
  • Ethical principles underlying AI governance, and the practical implementation of these principles  
  • Management of risks created by AI systems  
  • Critical evaluation of existing AI governance initiatives  
  • Proposals for new AI governance concepts and activities  
  • Reviews of AI governance literature and articulation of directions for future research  
  • Lessons for AI governance from other domains of governance and/or other fields of study 

The final deadline for submissions is June 30, 2021. Submissions will be reviewed on a rolling basis and manuscripts will be published upon acceptance.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause a takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
