A Robot in Care Space: A Case for Ethics
Abstract
Tuesday’s teaching demo will examine the case of designing an eldercare robot. We will explore the interests, values, and norms that such a design could challenge or uphold, and generate critical perspectives on how technical decisions about interactive systems (including chatbots) could respond to pressing needs and standards of care. We will conclude by asking, in light of our discussion, what ethics could mean for broader technical areas of computer science.
Bio
Thomas Arnold is a research associate at Tufts HRILab, where his work focuses on moral and social norms in human-robot interaction. His articles have addressed standards of explanation, verification, and supererogation in robotic decision-making, as well as the limits of moral dilemmas for evaluating it. He has helped design and teach “Ethics for AI, Robotics, and Human-Robot Interaction” in the Tufts computer science department, and he represents HRILab in its partner role in the Partnership on AI (PAI). In 2019 he served as co-organizer of “Coding Caring,” one of two studies commissioned by the One Hundred Year Study on Artificial Intelligence (AI100), and he is currently researching the normative demands that care contexts place on explicit reasoning and instructions for robots. The co-author of “Ethics for Psychologists: A Casebook Approach” (Sage 2011), he is completing a doctoral dissertation in Harvard’s Committee on the Study of Religion on appeals to experience in the philosophy of religion.