Tony Milligan to Deliver Online Lecture on Virtue Ethics 25 September

22 August 2013

This is the pre-event announcement for an online lecture by Tony Milligan, Lecturer in the Department of Philosophy at the University of Hertfordshire.

Here is the full talk info:

Virtue, Risk and Space Colonization
Wednesday 25 September 2013, 17:00 GMT (10:00 Los Angeles, 13:00 New York, 18:00 London)
To be held online via Skype. RSVP required by email to Seth Baum (seth [at] gcrinstitute.org). Space is limited.

Virtue ethics is one of the key components of contemporary ethical theory. It shifts our focus in the direction of living well and acting in line with admirable traits of character, such as practical wisdom. The practically wise agent will be just, courageous and a good decision-maker. They will recognize the ineradicability of our human vulnerability and will also tend to recognize which risks are most salient and most in need of a precautionary response. However, it has been (repeatedly) suggested that plans for space colonization do not express such practical wisdom (or the associated cluster of admirable traits) but instead express an escapist unwillingness to face up to far more immediate Earthly problems. In response I will (1) concede that this often has been (and continues to be) the case. A good deal of enthusiast literature on space colonization, from Tsiolkovsky to O'Neill and beyond, has always had a strong utopian and escapist dimension. However, I will also suggest (2) that those who press such a charge of escapism too far may, nonetheless, find it difficult to avoid falling foul of their own charge. That is to say, they are in danger of a flight from genuine, albeit long-range, risks. Finally, I will argue that (3) practical wisdom in the context of space colonization involves neither the ignoring of long-range risks nor their over-prioritization, but rather an acceptance that our human predicament, in the face of multiple risks which are simultaneously worthy of attention, is fundamentally (and inconveniently) dilemmatic.

Recent Publications

Climate Change, Uncertainty, and Global Catastrophic Risk

Is climate change a global catastrophic risk? This paper, published in the journal Futures, addresses the question by examining the definition of global catastrophic risk and by comparing climate change to another severe global risk, nuclear winter. The paper concludes that yes, climate change is a global catastrophic risk, and potentially a significant one.

Assessing the Risk of Takeover Catastrophe from Large Language Models

For over 50 years, experts have worried about the risk of AI taking over the world and killing everyone. The concern had always been about hypothetical future AI systems—until recent LLMs emerged. This paper, published in the journal Risk Analysis, assesses how close LLMs are to having the capabilities needed to cause takeover catastrophe.

On the Intrinsic Value of Diversity

Diversity is a major ethics concept, but it is remarkably understudied. This paper, published in the journal Inquiry, presents a foundational study of the ethics of diversity. It adapts ideas about biodiversity and sociodiversity to the overall category of diversity. It also presents three new thought experiments, with implications for AI ethics.