Quals research talk: Planning with and inferring moral and social norms with temporal logic

November 6, 2017
3:00pm
Halligan 209
Speaker: Daniel Kasenberg, Tufts University
Host: Matthias Scheutz

Abstract

Robots and other artificial agents are increasingly being considered for domains requiring complex decision-making and interactions with humans. These artificial agents will need the ability to obey human moral and social norms, even though these norms often conflict. These agents will also need the ability to learn such norms, both by instruction (in natural language) and by observing the behaviors of other agents.

Inverse reinforcement learning (IRL), which observes agent behaviors and infers a reward function that "explains" those behaviors, is often touted as a solution to the norm learning problem. We argue that IRL is inadequate for the task, since it is (1) incapable of learning temporally complex norms, and (2) not easily interpretable. To address these problems, we replace the reward function with a set of statements in linear temporal logic (LTL). We propose algorithms for maximally satisfying a set of LTL norms, even when they conflict. We also propose an approach for inferring LTL norms from observed behavior, analogous to IRL.
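
To give a flavor of the idea, below is a minimal Python sketch (not the speaker's implementation) of how LTL-style norms might be evaluated over finite behavior traces and traded off when they conflict. All names here (Norm, violation_cost, the example propositions) are illustrative assumptions, and the finite-trace semantics and weighted-violation scoring are one simplification of "maximal satisfaction" among several possible.

from dataclasses import dataclass
from typing import Callable, List, Set

State = Set[str]          # a state is the set of atomic propositions true in it
Trace = List[State]       # a finite behavior trace

@dataclass
class Norm:
    """An LTL-like norm evaluated over finite traces (a common simplification)."""
    name: str
    holds: Callable[[Trace], bool]   # finite-trace semantics for the formula
    weight: float = 1.0              # relative priority when norms conflict

def globally(prop: str) -> Callable[[Trace], bool]:
    """G prop: prop holds in every state of the trace."""
    return lambda trace: all(prop in state for state in trace)

def eventually(prop: str) -> Callable[[Trace], bool]:
    """F prop: prop holds in at least one state of the trace."""
    return lambda trace: any(prop in state for state in trace)

def violation_cost(trace: Trace, norms: List[Norm]) -> float:
    """Total weight of the norms the trace violates; lower is better."""
    return sum(n.weight for n in norms if not n.holds(trace))

def best_trace(candidates: List[Trace], norms: List[Norm]) -> Trace:
    """Pick the behavior that violates the least total norm weight."""
    return min(candidates, key=lambda t: violation_cost(t, norms))

if __name__ == "__main__":
    norms = [
        Norm("never leave the safe zone", globally("safe"), weight=2.0),
        Norm("eventually deliver the package", eventually("delivered"), weight=1.0),
    ]
    # Two hypothetical behaviors: one cuts through an unsafe area to deliver,
    # one stays safe but never delivers. The norms cannot both be satisfied.
    risky = [{"safe"}, set(), {"safe", "delivered"}]
    cautious = [{"safe"}, {"safe"}, {"safe"}]
    print(violation_cost(risky, norms))     # 2.0 (violates the safety norm)
    print(violation_cost(cautious, norms))  # 1.0 (violates the delivery norm)
    print(best_trace([risky, cautious], norms) is cautious)  # True

In this toy setting the cautious trace wins because the safety norm carries more weight; the talk's planning algorithms address the same kind of trade-off in full LTL over an agent's action model, and its inference direction asks the reverse question of which norms best explain a trace like the cautious one.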