Improving Visual SLAM for Indoor Environments

May 2, 2019
3:00 PM
Halligan 102
Speaker: Karthik Dantu, University at Buffalo
Host: Jivko Sinapov

Abstract

Sensing spatial context is important for several applications, such as robot navigation and Augmented/Mixed Reality apps on mobile devices. Simultaneous Localization and Mapping (SLAM) is a popular way to sense spatial context by mapping the environment and localizing the robot or mobile device within that map. Recently, SLAM using vision sensors (RGB cameras as well as depth cameras) has become popular. However, Visual SLAM faces several challenges, such as dealing with large volumes of data, perceptual aliasing, and difficulty in reasoning about semi-static environments. In this talk, I will discuss work from my lab on solving some of these problems. I will also briefly discuss other work from my lab on multi-UAV coordination.

Bio

Karthik Dantu is an Assistant Professor in Computer Science and Engineering at the University at Buffalo. He directs the Distributed RObotics and Networked Embedded Sensing (DRONES) Lab and co-directs the Reliable Mobile Systems (RMS) Lab with Prof. Steve Ko and Prof. Lukasz Ziarek. Previously, he obtained his Master's and Ph.D. degrees from the University of Southern California and was a Postdoctoral Fellow at Harvard. His research interests are in systems and sensing challenges in edge computing systems such as multi-robot systems, mobile systems, and the Internet of Things. In his spare time, he is usually thinking about American football, good coffee, and craft beer.