Miles Brundage to Deliver Online Lecture on Social Science and Artificial General Intelligence on 25 July

3 June 2013

This is the pre-event announcement for an online lecture by Miles Brundage, a PhD student in Human and Social Dimensions of Science and Technology at Arizona State University.

Here is the full talk info:

A Social Science Perspective on Global Catastrophic Risk Debates: The Case of Artificial General Intelligence
Thursday 25 July 2013, 17:00 GMT (10:00 Los Angeles, 13:00 New York, 18:00 London)
To be held online via Skype or equivalent. RSVP required by email to Seth Baum (seth [at] gcrinstitute.org). Space is limited.

Abstract: Researchers at institutions such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI) have suggested that artificial general intelligence (AGI) may pose catastrophic risks to humanity. This talk will contextualize such concerns using theories and frameworks drawn from science and technology studies (STS) and other social science fields. In particular, I will seek to answer the question: what conceptual and practical tools are available to a non-technical scholar, citizen, or policy-maker seeking to address global catastrophic risks from particular technologies? Using AGI as a case study, I will illustrate relevant concepts that could inform future work on global catastrophic risks, such as boundary work (the rhetorical techniques scientists use to demarcate their own work from the speculations of futurists and journalists, thereby cementing their own credibility while distancing themselves from potential catastrophic consequences of their disciplines), visioneering (the articulation of, and attempt to bring about, speculative technological futures, such as those of Eric Drexler in the case of nanotechnology and Ray Kurzweil in the case of AGI), plausibility (a useful framework for assessing future outcomes from technology, as opposed to probability), and responsible innovation (a rapidly growing field of inquiry assessing the various ways in which the public and policy-makers can positively influence the social impacts of scientific and technological research).

