A Framework for Transferring Interactive Behavioral Knowledge Across Different Robots

November 19, 2020
12:00-1:00 pm ET
Sococo Halligan 209; Zoom
Speaker: Gyan Tatiya
Host: Jivko Sinapov

Abstract

Humans use exploratory behaviors coupled with multi-modal perception to learn about the objects around them. Research in robotics has shown that robots too can perform such behaviors (e.g., grasping, pushing, shaking) to infer object properties that cannot always be detected using visual input alone. However, such learned representations are specific to each individual robot and cannot be directly transferred to another robot with different actions, sensors, and morphology. Therefore, each robot needs to learn its task-specific sensory models from scratch, which is an expensive process. To address this challenge, in this talk, I will present two frameworks for knowledge transfer from more experienced source robot(s) to a less experienced target robot.

The first framework is based on encoder-decoder networks that transfer knowledge across different behaviors and modalities, enabling a source robot to share what it has learned about objects with a target robot that has never interacted with them. We evaluated this framework on a category recognition task using a dataset containing 9 robot behaviors performed multiple times on a set of 100 objects, and showed that it enables a target robot to recognize a set of novel object categories without physically interacting with the objects to learn the categorization model. The second framework deals with a more complex scenario of transferring knowledge from multiple source robots to a target robot. It is based on kernel manifold alignment (KEMA), which enables source robots to transfer haptic knowledge about objects to a target robot by learning a common latent space from each robot's sensory data collected while interacting with objects. To test this framework, we used a dataset in which 3 simulated robots interacted with 25 objects, and showed that it speeds up haptic object recognition and enables recognition of novel objects. Overall, this research enables robots to learn interactive perception tasks from one another, which will facilitate the deployment of multi-sensory interactive perception models in a variety of robotics applications.
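To give a concrete flavor of the first framework's transfer step, below is a minimal, illustrative sketch (not the speaker's implementation) of an encoder-decoder network that maps a source robot's sensory features for shared objects into a target robot's feature space. The feature dimensions, the synthetic paired data, and the FeatureTransfer class are assumptions made for illustration only.

# Minimal sketch, assuming both robots' sensory features are fixed-length
# vectors. Dimensions and training data below are illustrative placeholders.
import torch
import torch.nn as nn

SOURCE_DIM, TARGET_DIM, LATENT_DIM = 64, 48, 16  # hypothetical feature sizes

class FeatureTransfer(nn.Module):
    def __init__(self):
        super().__init__()
        # Encode source-robot features into a shared latent code,
        # then decode that code into the target robot's feature space.
        self.encoder = nn.Sequential(nn.Linear(SOURCE_DIM, LATENT_DIM), nn.ReLU())
        self.decoder = nn.Linear(LATENT_DIM, TARGET_DIM)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Paired observations of the same objects by both robots (synthetic here).
src = torch.randn(200, SOURCE_DIM)
tgt = torch.randn(200, TARGET_DIM)

model = FeatureTransfer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(src), tgt)  # reconstruct target features from source features
    loss.backward()
    opt.step()

# At test time, source-robot features for novel objects can be projected into
# the target robot's feature space and used to train its category recognition
# model without new physical interaction.
projected = model(torch.randn(10, SOURCE_DIM))

In the same spirit, the second framework replaces the learned projection with kernel manifold alignment, so that several source robots and the target robot are all embedded into one common latent space.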

Join meeting in Sococo VH 209. Login: tuftscs.sococo.com

Join Zoom Meeting: https://tufts.zoom.us/j/98610939077 PASSWORD: See colloquium email

Dial by your location: +1 646 558 8656 US (New York)

Meeting ID: 986 1093 9077

Passcode: See colloquium email

Find your local number: https://tufts.zoom.us/u/adS4koag4r