AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries

2 June 2021

Certification is widely used to convey that an entity has met some sort of performance standard. It covers everything from the certificate a person receives for completing a university degree to certifications of energy efficiency in consumer appliances and of quality management in organizations. As AI technology becomes increasingly impactful across society, certification can play a role in improving AI governance. This paper presents an overview of AI certification, applying insights from prior research and experience with certification in other domains to the relatively new domain of AI. The paper is co-authored by Peter Cihon of GitHub, Moritz Kleinaltenkamp of the Hertie School Centre for Digital Governance, Jonas Schuett of Goethe University Frankfurt and the Legal Priorities Project, and Seth Baum of GCRI.

The paper is part of a forthcoming collection on AI soft law edited by Carlos Ignacio Gutierrez and Gary Marchant of Arizona State University under the ASU program Soft-Law Governance of Artificial Intelligence. Soft law is a class of governance mechanisms involving guidelines that are not legally binding. Certification programs are often not legally binding: for example, students are generally not required by law to graduate from university, and companies are often not required to pursue certification for their products. Other types of soft law include recommended best practices published by public or private institutions, ethics committees that advise organizations, and scorecards of organizational practices published by outside NGOs. Soft law is attractive for emerging technology governance because it is relatively easy to establish and to keep up to date as the technology changes, and because it can serve as a step toward stricter hard law.

The paper presents what we believe to be the first-ever research study of AI certification, and it therefore serves to establish essential fundamentals of the topic. As the paper explains, a primary role of certification is to reduce information asymmetries: people on the outside don't know what's going on on the inside, and certification sheds some light. For example, the degree(s) a university graduate holds provide some information about what the student has learned and is able to do. The information provided by certification can help outsiders select desirable suppliers of a good or service and monitor whether a supplier is meeting specified agreements. By reducing the asymmetry of information between insiders and outsiders, certification can further serve to incentivize good behavior by the insiders. For example, a student may try harder to learn if doing so will be rewarded with a university degree. The value of certification also depends on certifying the right thing, such as by having university degrees require studying topics that are in fact valuable to learn.
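As a rough illustration of this mechanism, consider the following Python toy in the spirit of Akerlof's "market for lemons." This is a hypothetical sketch, not a model from the paper: the quality distribution and the 0.7 certification threshold are arbitrary assumptions. Sellers know their product quality but buyers do not; a certificate reveals one coarse bit of information, which both raises the price certified sellers can command and creates an incentive to invest in quality.

```python
# Hypothetical toy model of certification reducing information asymmetry.
# Not from the paper: the quality distribution and threshold are illustrative.
import random

random.seed(0)

N_SELLERS = 1000
CERT_THRESHOLD = 0.7  # assumed quality bar for earning the certificate

# Each seller's true quality is private information (uniform on [0, 1]).
qualities = [random.random() for _ in range(N_SELLERS)]

# Without certification, buyers cannot tell sellers apart, so every seller
# gets the same pooled price: the average quality of the whole market.
pooled_price = sum(qualities) / len(qualities)

# With certification, buyers learn one coarse bit (certified or not), and
# prices reflect the average quality within each group.
certified = [q for q in qualities if q >= CERT_THRESHOLD]
uncertified = [q for q in qualities if q < CERT_THRESHOLD]
certified_price = sum(certified) / len(certified)
uncertified_price = sum(uncertified) / len(uncertified)

print(f"pooled price without certification: {pooled_price:.2f}")
print(f"price with certificate:             {certified_price:.2f}")
print(f"price without certificate:          {uncertified_price:.2f}")

# The gap between the two prices is the incentive effect: a seller just
# below the threshold gains by improving quality enough to clear the bar.
print(f"certification premium:              {certified_price - uncertified_price:.2f}")
```

In this sketch the certificate does not change any product; it only transmits information, and that alone is enough to shift prices and incentives.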

The paper surveys the current landscape of AI certification, identifying seven active and proposed programs: (1) the European Commission White Paper on Artificial Intelligence, (2) the IEEE Ethics Certification Program for Autonomous and Intelligent Systems, (3) the Malta AI Innovative Technology Arrangement, (4) the Turing Certification proposed by Australia's Chief Scientist, (5) the Queen's University executive education program Principles of AI Implementation, (6) the Finnish civics course Elements of AI, and (7) a Danish labeling program, under development, for IT security and responsible use of data. These programs demonstrate the variety of forms AI certification can take, including both public and private programs, certification of both individuals and groups, and coverage of a range of AI-related activities.

Finally, the paper addresses the potential value of certification for future AI technology. Some aspects of certification will likely remain relevant even as the technology changes. For example, the roles of corporations, their employees and management, governments, and other actors tend to stay the same, so certification programs can remain relevant over time by emphasizing these human and institutional factors. Programs can also build in mechanisms to update their certification criteria as AI technology changes. Looking further into the future, it could become difficult to certify advanced human-level AI systems because such systems could exceed the capacity of human governance. Certification may nonetheless play a constructive role in governing the processes that lead to the development of such systems. It could be especially valuable for building trust among rival AI development groups and for ensuring that advanced AI systems are built to high standards of safety and ethics.

In summary, certification can be a valuable tool for AI governance. It is not a panacea for ensuring ethical AI, but it can help, especially by reducing information asymmetries and incentivizing ethical AI development and use.

This paper has also been summarized in the AI Ethics Brief #69 of the Montreal AI Ethics Institute.

Academic citation:
Cihon, Peter, Moritz J. Kleinaltenkamp, Jonas Schuett, and Seth D. Baum, 2021. AI certification: Advancing ethical practice by reducing information asymmetries. IEEE Transactions on Technology and Society, vol. 2, issue 4 (December), pages 200-209, DOI 10.1109/TTS.2021.3077595.

Download Preprint PDF
View in IEEE Transactions on Technology and Society

Image credit: Buckyball Design

