Deep Learning and the Sociology of Human-Level Artificial Intelligence

by Seth Baum | 18 June 2020

Download Preprint PDF

The study of artificial intelligence has a long history of contributions from critical outside perspectives, such as work by philosopher Hubert Dreyfus. Following in this tradition is a new book by sociologist Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers. I was invited to review the book for the journal Metascience.

The main focus of the book is on nuances of human sociology, especially language, and their implications for AI. This is a worthy contribution, all the more so because social science perspectives are underrepresented in the study of AI relative to perspectives from computer science, cognitive science, and philosophy. On the other hand, the book's treatment of the AI techniques it addresses is weak. A better book for that is Rebooting AI by Gary Marcus and Ernest Davis; I would recommend it to readers outside the field of computer science who would like to understand the computer science of AI.

Artifictional Intelligence argues that deep learning—the current dominant AI technique—cannot master human language because it is based on statistical pattern recognition over large datasets, whereas language often addresses novel situations for which data is scarce or absent. (Rebooting AI also makes this argument.) Artifictional Intelligence shows this via some clever and entertaining experiments, such as using Google Translate to translate certain phrases from English into another language and then back into English. For example, "I field at short leg", an expression from cricket, is more successfully translated to and from Afrikaans ("I field on short leg") than Chinese ("I am in the short leg field"), which makes sense given the geography of cricket. (The translations listed here are from the time of writing this blog post; they change over time as the Google Translate algorithm is updated and as it processes more data.)
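The round-trip probe is simple enough to sketch in a few lines of Python. Here `translate` is a hypothetical callable standing in for any machine translation client (the book's experiments used Google Translate); the stub below returns canned values matching the round-trip results reported above, with placeholder strings—not real Afrikaans or Chinese—as the intermediate translations.

```python
def round_trip(text, pivot, translate):
    """Translate `text` from English into the `pivot` language and back,
    to see how much of the meaning survives the round trip."""
    outbound = translate(text, "en", pivot)
    return translate(outbound, pivot, "en")


def fake_translate(text, src, dst):
    """Stub translator returning the round-trip outputs reported in this
    post. A real probe would call a live MT service, whose answers drift
    over time as the underlying model and data change."""
    outbound = {
        ("I field at short leg", "af"): "<afrikaans rendering>",
        ("I field at short leg", "zh-CN"): "<chinese rendering>",
    }
    inbound = {
        "<afrikaans rendering>": "I field on short leg",
        "<chinese rendering>": "I am in the short leg field",
    }
    return inbound[text] if dst == "en" else outbound[(text, dst)]


print(round_trip("I field at short leg", "af", fake_translate))
# -> I field on short leg
print(round_trip("I field at short leg", "zh-CN", fake_translate))
# -> I am in the short leg field
```

Passing the translator in as a parameter keeps the harness independent of any particular service, so the same probe can be rerun against a live API to watch the translations shift as the model is updated.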

The book further argues that for an AI to achieve human-level language ability, it would need to be embedded in human society. Only then would it master the nuances of human language. The book draws on Collins’s experience as a sociologist studying communities of gravitational wave physicists. Collins participated in imitation games in which he tried to pass himself off as a gravitational wave physicist, analogous to the well-known Turing test for AI. Collins attributes his own success at these games to his extensive time embedded in gravitational wave physics communities. This experience, as well as his understanding of the relevant sociology, prompts Collins to conclude that an AI would need to be similarly embedded in order to reach human-level ability in language.

One serious problem with the book is that it consistently treats human-level AI as a scientific endeavor without considering its ethical and societal implications. Collins wishes the field of AI were more like the field of gravitational physics in its narrow focus on big scientific breakthroughs. That is bad advice. The field of AI needs more attention to its ethical and societal implications, not less. AI has profound ethical and societal implications given its many current and potential future applications. AI experts need to participate in efforts to address these matters in order to ensure that these efforts are based on a sound understanding of the technology.

Academic citation:
Baum, Seth D., 2020. Deep learning and the sociology of human-level artificial intelligence. Metascience, vol. 29, no. 2 (July), pages 313-317, DOI 10.1007/s11016-020-00510-6.

View in Metascience | View in ReadCube

Image credit: Wiley
