Recommendations to the OSTP on the National Artificial Intelligence Research and Development Strategic Plan

18 March 2022

View GCRI’s submission to the OSTP on the National Artificial Intelligence Research and Development Strategic Plan.

On 1 January 2021, the National AI Initiative Act of 2020 became law as part of the National Defense Authorization Act. The National AI Initiative Act calls for regular updates to the National AI R&D Strategic Plan to include “goals, priorities, and metrics for guiding and evaluating how the agencies carrying out the National AI Initiative will support AI research.” The Office of Science and Technology Policy (OSTP) requested input from the public on potential revisions to the strategic plan for AI R&D. On 4 March 2022, GCRI responded to the Request for Information (RFI) by submitting this comment to the OSTP on updating the National Artificial Intelligence Research and Development Strategic Plan.

GCRI supports the eight strategic aims of the 2019 update to the National AI R&D Strategic Plan and submitted recommendations for seven of them. GCRI’s suggestions emphasize ethics, diversity, and inclusion throughout the field of AI R&D in order to reduce risks to humans and the environment. These suggestions are informed by and draw upon GCRI’s ongoing research on AI ethics and governance, which emphasizes synergies between the governance needs for near-term and long-term AI issues.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.
