Robot Learning from non-experts: Learning from noisy evaluative feedback and keyframe-based demonstrations

November 11, 2021
10:00 am ET
Speaker: Hang Yu
Host: Elaine Short


Quals talk:

Learning complex manipulation tasks, especially robot control tasks, with model-free agents is a challenging problem. Although Reinforcement Learning (RL) has shown the ability to solve hard problems in tasks with well-defined reward functions, its data requirements are unaffordable for most robotic tasks. We therefore focus on the problem of learning from non-expert humans with affordable data needs. A natural way to learn from humans is to ask for evaluative feedback on the actions the agent takes. We claim that scaled feedback, rather than binary feedback, carries more valuable information that can be exploited to speed up learning. We present a taxonomy that quantifies the dynamics in human feedback, and we present STEADY, a learning framework that learns effectively from noisy scaled feedback and makes the learning agent aware of feedback dynamics. Beyond receiving feedback that evaluates the robot's behavior, we are also working on speeding up robot learning in tasks with difficult exploration by using keyframe-based demonstrations. Our key insights are that: 1) demonstrations vary in quality and should be evaluated before learning; and 2) genetic algorithms can auto-generate demonstrations and reduce the problem of learning from multiple solutions/demonstrators.
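One way to picture why scaled feedback carries more information than binary feedback is as a per-action weight in a simple preference update: a strongly rated action moves the policy further per step than a mildly rated one. The sketch below is purely illustrative, not the STEADY algorithm; the function and action names are hypothetical.

```python
# Hypothetical sketch: scaled human feedback as a per-action weight
# in a bandit-style preference update (NOT the STEADY algorithm).

def update_preferences(prefs, action, feedback, lr=0.1):
    """Shift the preference for `action` by scaled feedback in [-2, 2].

    Binary feedback would collapse `feedback` to {-1, +1}; a scaled
    signal lets strong ratings produce proportionally larger updates.
    """
    new = dict(prefs)
    new[action] = new[action] + lr * feedback
    return new

prefs = {"push": 0.0, "grasp": 0.0}
# Strong positive feedback on "grasp", mild negative feedback on "push".
prefs = update_preferences(prefs, "grasp", +2.0)
prefs = update_preferences(prefs, "push", -0.5)
```

With binary feedback both updates would have the same magnitude; here the scaled signal separates a decisively good action from a mildly bad one in a single step.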

Please join the meeting via Zoom.

Join Zoom Meeting:

Meeting ID: 971 8312 0811

Password: See colloquium email

Dial by your location: +1 646 558 8656 US (New York)