Distributed controllers for multi-agent swarms: from collaborative manipulation to articulated locomotion
Given the rapid development of affordable, complex robots with embedded sensing and computation capabilities, we are quickly approaching a point at which robotic deployments will involve hundreds of robots, or robots with tens of degrees of freedom (DOFs). However, as the number of agents (robots or DOFs) in the system grows, so does the combinatorial complexity of coordinating them. One solution to this problem is to turn to distributed approaches, where the swarm-level task can be broken into subtasks that are then assigned to different agents. In this talk, I will present distributed approaches to two problems: 1) collaboratively grasping and translating an object to a desired pose on the surface of water with a team of autonomous surface vehicles, and 2) controlling the posture of an articulated robot during locomotion over steep or unstructured terrain by distributing locomotion and stabilization across its DOFs. Finally, I will turn to recent advances in distributed reinforcement learning, which let multiple agents learn a shared, homogeneous policy in a time-efficient manner by pooling their experiences. There, I show that the resulting collaborative policy naturally scales to arbitrary numbers of agents while remaining near-optimal.
Guillaume Sartoretti, who earned his doctorate from the École Polytechnique Fédérale de Lausanne in 2016, is currently a Manufacturing Futures Initiative (MFI) Postdoctoral Fellow in the Robotics Institute at Carnegie Mellon University. His research focuses on the distributed/decentralized coordination of numerous agents, at the interface between conventional control and artificial intelligence. Applications range from multi-robot systems, where independent robots must coordinate their actions to achieve a common goal, to high-DOF articulated robots, where joints must be carefully coupled during locomotion over rough terrain.