Keeping Humans in the Loop: Teaching via Feedback in Continuous Action Space Environments

December 14, 2022
2:00pm ET
Cummings 280
Speaker: Isaac Sheidlower
Host: Elaine Short

Abstract

Quals talk:

Interactive Reinforcement Learning (IntRL) allows human teachers to accelerate the learning process of Reinforcement Learning (RL) robots by providing binary good/bad feedback signals. However, IntRL has largely been limited to tasks with discrete action spaces in which actions are relatively slow. This limits IntRL's application to more complicated and challenging robotic tasks, the very tasks that modern RL is particularly well suited for. We seek to bridge this gap by presenting Continuous Action-space Interactive Reinforcement Learning (CAIR): the first continuous action-space IntRL algorithm, which can use teacher feedback to outperform state-of-the-art RL algorithms in such tasks. CAIR combines the policies learned from the environment and from the teacher into a single policy that weights the two proportionally to their agreement. This allows a CAIR agent to learn a relatively stable policy despite potentially noisy or coarse teacher feedback. We validate our approach in two simulated robotics tasks with heuristic oracle teachers that are easy to design and understand. Furthermore, we validate our approach in an online human-subjects study through Amazon Mechanical Turk and show that CAIR outperforms the prior state of the art in Interactive RL.
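For intuition, an agreement-weighted combination of two continuous-action policies might look something like the minimal Python sketch below. This is not the speaker's implementation: the diagonal-Gaussian policies, the kernel-based agreement measure, and the mixture-sampling rule are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def agreement(mu_env, mu_teacher, scale=1.0):
        # Agreement in (0, 1]: near 1 when the two policies propose similar
        # mean actions. The Gaussian-kernel form is an illustrative choice.
        return float(np.exp(-np.sum((mu_env - mu_teacher) ** 2) / (2 * scale ** 2)))

    def combined_action(mu_env, std_env, mu_teacher, std_teacher):
        # Sample from an agreement-weighted mixture of the environment-learned
        # policy and the teacher-shaped policy (both diagonal Gaussians).
        # How the weight is split between the two components is an assumption.
        w = agreement(mu_env, mu_teacher)
        if rng.random() < w:
            mu, std = mu_env, std_env          # high agreement: follow the env policy
        else:
            mu, std = mu_teacher, std_teacher  # low agreement: defer to the teacher
        return rng.normal(mu, std)

    # Example: two nearby action proposals in a 2-D continuous action space.
    mu_env, mu_teacher = np.array([0.2, -0.1]), np.array([0.25, -0.05])
    print(combined_action(mu_env, np.full(2, 0.1), mu_teacher, np.full(2, 0.1)))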

Please join the meeting in Cummings 280.

Zoom is not available for this event; please disregard the dial-in passcode included in the email.