Investigating Methods for Assessing Human Cognitive Workload for Assistive Robots
Assessing the cognitive workload of human interactants in mixed-initiative teams is a critical capability for autonomous interactive systems, enabling adaptations that improve team performance. Yet it remains unclear which sensing modality works best for predicting human workload. In this study, we analyzed and modeled data from a multimodal simulated driving setup designed to evaluate different levels of cognitive workload induced by secondary tasks, such as dialogue interactions and braking events, in addition to the primary driving task. We examined the effectiveness of multiple signal types, including electroencephalography (EEG), electrocardiography (ECG), eye gaze, and arterial blood pressure, for estimating human cognitive workload. Our analyses provide evidence that eye gaze is the best physiological indicator of human cognitive workload, even when multiple signals are combined. These findings are significant for future efforts in real-time cognitive workload assessment in multimodal human-robot interaction settings, given that eye gaze is easy to collect and process and is less susceptible to noise artifacts than other physiological signal modalities.
Research area: Human-Robot Interaction