Control Input and Natural Gaze for Goal Prediction in Shared Control

October 6, 2022
3:00-4:15pm ET
Cummings 270, Zoom
Host: Elaine Short

Abstract

Shared control systems can make complex teleoperation tasks easier for users. These systems predict the user's goal, determine the motion the robot needs to reach that goal, and combine that motion with the user's input. This form of assistance requires an accurate understanding of the user's goals. One common approach derives goal predictions from the user's control input itself; we show that this approach's effectiveness depends on users being able to provide near-optimal control input. When the user's control input is restricted, however, assistance based only on control input for goal prediction may be suboptimal. For that case, we introduce another source of goal information: natural gaze. People's natural, unconstrained eye gaze behavior reveals information about their immediate goals and their future tasks. We show that these two signals, control input and eye gaze, complement each other for goal prediction during shared control. Control input gives local information about the user's goal, making it particularly effective in simple tasks where people can act optimally, but limiting its performance in more complex tasks. Eye gaze, on the other hand, provides global information about task intentions early, but does so less reliably. To demonstrate this complementarity, we first formalize evaluation criteria for goal prediction sources and examine how goal prediction from control input affects the assistance. Next, we collect data on people's natural gaze behavior while they control a robot and show that gaze enables early goal predictions. Finally, we implement a novel shared control system that combines natural eye gaze with joystick input to predict people's goals online, and we evaluate our system in a real-world, COVID-safe user study. We find that modal control, in which users switch among control modes that each command a subset of the robot's degrees of freedom, reduces the efficiency of assistance, consistent with our model, and that when gaze provides a prediction earlier in the task, the system's performance improves. However, gaze alone is unreliable, and assistance using only gaze performs poorly. We conclude that control input and natural gaze play different, complementary roles in goal prediction, and that using them together improves assistance performance.
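
For readers unfamiliar with this kind of pipeline, below is a minimal Python sketch of the loop the abstract describes: maintain a belief over candidate goals, update it from control input and gaze observations as each becomes available, and blend the user's command with assistance toward the most likely goal. The likelihood models, function names, and blending rule here are illustrative assumptions, not the system presented in the talk.

import numpy as np

def control_likelihood(u, x, goal, beta=5.0):
    """P(u | goal): higher when the user's input u points toward the goal
    from state x. A simple alignment-based model; the talk's actual model
    may differ."""
    to_goal = (goal - x) / (np.linalg.norm(goal - x) + 1e-8)
    alignment = float(np.dot(u, to_goal))  # in [-1, 1] for unit-norm u
    return np.exp(beta * alignment)

def gaze_likelihood(gaze_point, goal, sigma=0.1):
    """P(gaze | goal): higher when the gaze landing point is near the goal
    (an assumed Gaussian falloff)."""
    d = np.linalg.norm(gaze_point - goal)
    return np.exp(-d**2 / (2 * sigma**2))

def update_belief(belief, goals, x, u=None, gaze_point=None):
    """Fuse whichever observations are available this timestep:
    control input gives local evidence, gaze gives global evidence."""
    for i, g in enumerate(goals):
        if u is not None:
            belief[i] *= control_likelihood(u, x, g)
        if gaze_point is not None:
            belief[i] *= gaze_likelihood(gaze_point, g)
    return belief / belief.sum()

def blend(u, x, goals, belief, alpha_max=0.8):
    """Linear blending: assist more as confidence in the top goal grows."""
    i = int(np.argmax(belief))
    autonomy = (goals[i] - x) / (np.linalg.norm(goals[i] - x) + 1e-8)
    alpha = alpha_max * belief[i]
    return (1 - alpha) * u + alpha * autonomy

In this sketch, early gaze observations can shift the belief toward a goal before the user's joystick input disambiguates it, which is the complementarity the abstract argues for.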

Bio:

Reuben Aronson is a postdoc with Prof. Elaine Short in the AABL Lab at Tufts, working on mutually assistive robotics. He received his Ph.D. in August from the Robotics Institute at Carnegie Mellon University, where he was advised by Prof. Henny Admoni and worked on using eye gaze for shared control of assistive robot manipulators. Prior to that, he worked at the Naval Research Lab in Washington, DC, and received a B.S. in mechanical engineering from MIT.

Please join the meeting in Cummings 270 or via Zoom.

Join Zoom Meeting: https://tufts.zoom.us/j/96038251227

Meeting ID: 960 3825 1227

Passcode: see colloquium email

Dial by your location: +1 646 558 8656 US (New York)
