Tim Maher Gives Lecture on Ambient Intelligence

6 August 2013

On Thursday 1 August, GCRI hosted an online lecture by Tim Maher entitled ‘Ambient Intelligence: Implications for Global Environmental Change and Totalitarianism Risk’ (see the pre-lecture announcement). Maher is a recent graduate of Bard College’s M.S. program in Climate Science and Policy and a GCRI Research Assistant. The lecture is based on Maher’s M.S. thesis ‘Ambient Intelligence and Ambient Persuasive Technology: Sustainability and Threats to Autonomy’ [1]. The lecture included discussants Maurits Kaptein, Assistant Professor of Statistics and Research Methods at the University of Tilburg and founder of PersuasionAPI, and Arden Rowell, GCRI Research Associate and Associate Professor at the University of Illinois College of Law.

Maher’s lecture began with an overview of the two types of technological advancement under discussion: Ambient Intelligence (AmI) and Ambient Persuasive Technology (AmPT). AmI is the embedding of smart technology into the built environment. These embedded devices sense and process information and communicate it with each other and with the Internet. AmI models individual preferences and behaviors, and maximizes an individual’s comfort by manipulating the surrounding built environment. The AmI in a home would, for example, sense preferences in temperature or lighting and manage those things automatically for the individual. Many examples of such embedding already exist or are in development [2]. AmI can help the environment by turning off appliances when we don’t need them or adjusting the thermostat for us. AmI can also effectively automate demand-side management of the electricity grid, which could reduce peak load, the need for new power plants, and, in turn, greenhouse gas emissions.

AmPT uses the same types of technologies to automate the persuasion of behavior. AmPT models individual decision-making processes and then manipulates the surrounding environment to increase the likelihood that individuals make a specific choice. One example discussed was “The Shower Calendar” [3], in which a display projected onto the shower wall shows personal water consumption. A personalized colored dot in a calendar-like matrix shrinks with consumption, communicating an individual’s water use relative to others’ in the household and the limitedness of the resource. The intention behind such a technology is to give individuals feedback about their relative water consumption and persuade them to reduce it. Because they can be so ubiquitous, AmI and AmPT can reduce our environmental impact across the many big and small environmentally consequential things we do.

The AmI and AmPT paradigms, Maher explains, are born of advances in enabling technologies such as sensors, microprocessors, and WiFi chips, which are getting smaller and thus increasingly capable of being embedded directly into everyday materials. The repercussions of the extensive integration of these technologies are difficult to predict. These technologies can bring large benefits, such as helping society achieve a more sustainable use of resources, increasing economic and energy efficiency (through AmI), or increasing the likelihood of environmentally beneficial behaviors (through AmPT). However, they also carry large risks, including threats to individual and state security, privacy, totalitarianism, paternalism, and other threats to autonomy.

The current trajectory of technology suggests that the integration of AmI and AmPT into society is highly likely. How then, Maher asks, do we create policies that enable the benefits while also reducing the risks? At the heart of this is the issue of paternalism, in which technology designers decide what is best for the users: what ambience to seek, which persuasions to make. But all technologies incorporate their designers’ values; we can’t avoid it. If technological paternalism is inevitable, then perhaps the question is not whether these technologies should be developed, but how they are developed, by whom, and with what set of values.

Maher proposes a solution. To increase the benefits of these technologies, he suggests, individuals should be able to determine what functions or operations AmI and AmPT systems serve within their lives, under what circumstances they agree to allow automated ambient systems into their lives, and what information is collected and acted upon. In other words, individuals should be their own paternalists: in theory, each person should be able to decide under what circumstances they agree to be ambiently persuaded, if ever, and what information is persuasively transmitted.

Online participants raised several points about this idea. To some, there remain major grey areas in the boundary between nudging an individual toward a certain behavior (say, by highlighting a particular option with a different color in a text, or making one option the default that an individual would need to opt out of) and actually coercing an individual. One discussant raised the role of the fundamental attribution error, a concept from social psychology in which people tend to think that they themselves are not influenced by ‘nudges’ but believe other people are. If this holds true for different types of AmI, one concern is that many people will not manage their own AmI settings, and those who do will not fully understand how the settings are affecting their behavior.

Participants were also interested in the government’s role in policy design and implementation. One discussant pointed out that there may be existing agencies whose purview could, or should, include this sort of monitoring. The Federal Communications Commission is one agency whose purview already includes expanding and strengthening the nation’s communication infrastructure, and which could arguably be well suited to monitoring and regulating this sort of technology. Recent policy developments relate to these issues: Executive Order 13563 requires “U.S. regulators, to the extent permitted by law, to select approaches that maximize net benefits; choose the least burdensome alternative; increase public participation in the rulemaking process; design rules that are simpler and more flexible, and that provide freedom of choice; and base regulations on sound science” [4]. That increased public participation in rulemaking and flexible rule design are being mandated at the executive level could have implications for AmI and AmPT policy.

Overall, the issue of informed consent was central to this discussion. How informed consent is accomplished, and in what way people’s autonomy is translated into transparent, flexible policies remains a complex and compelling issue for the times ahead.

The full abstract of the online lecture follows:

One set of emerging information and communication technologies, Ambient Intelligence (AmI) and Ambient Persuasive Technology (AmPT), offers large possible benefits toward encouraging a more sustainable use of resources. Ambient Intelligence can increase economic and energy efficiency by automating the built environment. Ambient Persuasive Technology can help increase the likelihood of behaviors that better the environment. However, these technologies come with great risk. In this presentation, I analyze this risk, focusing on issues of individual and state security, privacy, and totalitarianism, and conclude that the majority of this risk can be distilled down to the threat of paternalism and other threats to autonomy. I then analyze the ethics of AmI and AmPT paternalism and potential threats to autonomy using Kantian deontological and Millian consequentialist frameworks. Assuming this set of ethical views is correct, this ethical investigation concludes that the only moral way to implement these technologies is to give individual users the ability to control their relationship with AmI and AmPT. In effect, users should be their own paternalists. I conclude with discussions and policy recommendations for improving data security, enabling users to be more involved in the design process of AmI and AmPT, and enabling users greater direct control over their own levels of consent for each AmI and AmPT operation.

The presentation was hosted online via Skype, with slides shown in PowerPoint. The attendees included Gautam Sethi, Associate Professor of Economics at the Bard College Center for Environmental Policy; Miles Brundage, PhD student in Arizona State University’s Human and Social Dimensions of Science and Technology program; and GCRI’s Seth Baum, Tony Barrett, Kaitlin Butler, Grant Wilson, Mark Fusco, and Robert de Neufville.

Thanks to Maurits Kaptein, Assistant Professor of Statistics and Research Methods at the University of Tilburg and founder of PersuasionAPI, and Arden Rowell, GCRI Research Associate and Associate Professor at the University of Illinois College of Law for their role as discussants in this lecture.

[1] Maher, Tim (May 2013). Ambient Intelligence and Ambient Persuasive Technology: Sustainability and Threats to Autonomy. Bard Center for Environmental Policy: Annandale-on-Hudson, NY.

[2] Wasik, Bill (2013). Welcome to the Programmable World. Wired Magazine.

[3] Laschke, Matthias, et al. (2011). “With a little help from a friend: a shower calendar to save water.” CHI’11 Extended Abstracts on Human Factors in Computing Systems. ACM.

[4] Sunstein, Cass (2012). Reducing Red Tape: Regulatory Reform Goes International. The Office of Management and Budget, White House.
