Graduate Research Talk: One-Shot Instruction Based Action Learning in a Cognitive Robotic Architecture

October 26, 2018
1:00 PM
Halligan 209
Speaker: Tyler Frasca
Host: Matthias Scheutz

Abstract

An enabling characteristic of humans is that we aren't limited to a fixed set of preprogrammed tasks. We quickly learn new tasks through language, observation, and other forms of communication. Once we understand the essential aspects of a task, we are able to extend and modify it. Not only should artificial agents be able to learn and extend their knowledge in a similar manner, but they should do so with minimal effort from the person teaching them. There has been extensive research into teaching robots new actions, but most approaches rely on large datasets, and it may be infeasible to provide enough data while a robot is deployed in the real world. Therefore, my research thus far has focused on how to teach robots to perform new actions from a single set of spoken instructions. In this presentation, I will discuss the importance of being able to teach robots through natural language and the requirements for a robot to learn through natural language, and then show how I leverage the linguistic and reasoning capabilities of a cognitive robotic architecture to implement one-shot action learning. Finally, I will discuss how I plan to extend my research to minimize the effort required and make it as natural as possible for people to teach robots new tasks and skills.