PhD Defense: Real-time fNIRS Brain Input for Enhancing Interactive Systems

October 24, 2011
10:00–11:00 a.m.
Conference Room, 196 Boston Avenue, 4th Floor
Speaker: Erin Solovey, Tufts University
Host: Rob Jacob

Abstract

Please note: the room has changed to the Conference Room, 196 Boston Avenue.

Most human-computer interaction (HCI) techniques cannot fully capture the richness of the user's thoughts and intentions when interacting with a computer system. For example, when we communicate with other people, we do not simply use words, but also accompanying cues that give the other person additional insight into our thoughts. When we communicate with computers, we generate similar additional signals, but the computer cannot sense them and therefore ignores them. Detecting these signals in real time and incorporating them into the user interface could improve the communication channel between the computer and the human user with little additional effort required of the user, leading to technology that is more supportive of the user's changing cognitive state. Such improvements in bandwidth are increasingly valuable as technology becomes more powerful and pervasive while our cognitive abilities remain largely unchanged.

In my dissertation, I explore using brain sensor data as a passive, implicit input channel that expands the bandwidth between the human and computer by providing supplemental information about the user. Using a relatively new brain imaging tool called functional near-infrared spectroscopy (fNIRS), we can detect signals within the brain that indicate various cognitive states. This device provides data on brain activity while remaining portable and non-invasive. This research aims to develop tools to make brain sensing more practical for HCI and to demonstrate effective use of this cognitive state information as supplemental input to interactive systems.

First, I explored practical considerations for using fNIRS in HCI research to determine the contexts in which fNIRS realistically could be used. Second, in a series of controlled experiments, I identified cognitive multitasking states that could be classified reliably from fNIRS data in offline analysis. Based on these experiments, I created "Brainput", a system that learns to identify brain activity patterns occurring during multitasking. Brainput provides a continuous, supplemental input stream to an interactive human-robot system, which uses this information in real time to modify its behavior to better support multitasking. Finally, I conducted an experiment to investigate the efficacy of Brainput and found improvements in both task performance and user experience.
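To give a concrete flavor of the offline classification step, the sketch below shows a generic windowed-feature pipeline in Python. All specifics are illustrative assumptions for exposition (synthetic data, channel count, window length, mean-and-slope features, and a linear SVM); they are not the actual features or classifier used in the dissertation.

    # Illustrative sketch only: a minimal offline pipeline for classifying
    # cognitive-state labels from windowed fNIRS signals. Channel count,
    # window length, features, and classifier are hypothetical.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic stand-in for preprocessed fNIRS data:
    # 120 trials x 16 channels x 300 samples (e.g., 30 s at 10 Hz).
    n_trials, n_channels, n_samples = 120, 16, 300
    signals = rng.normal(size=(n_trials, n_channels, n_samples))
    labels = rng.integers(0, 2, size=n_trials)  # two multitasking states

    # Simple per-channel summary features: mean and slope of each window.
    t = np.arange(n_samples)
    means = signals.mean(axis=2)
    slopes = np.polyfit(t, signals.reshape(-1, n_samples).T, deg=1)[0]
    slopes = slopes.reshape(n_trials, n_channels)
    features = np.hstack([means, slopes])

    # Offline analysis: cross-validated accuracy of a standard classifier.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, features, labels, cv=5)
    print(f"Cross-validated accuracy: {scores.mean():.2f}")

In a real pipeline, the synthetic array would be replaced by preprocessed hemoglobin-concentration signals segmented by task condition, and the trained classifier would then be applied to incoming windows in real time to produce the continuous input stream described above.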