PhD Research Talk: Toward Adaptive Brain-Computer Interfaces for Users with Nonspecific Goals
The development of automatically adaptive interfaces has been an active area of research in HCI for more than 20 years. However, this work generally assumes that the user knows the exact outcome of their task (i.e., has a specific goal), and that it is the system's responsibility to correctly predict this goal and aid the user in achieving it. This is not a realistic representation of many real-world tasks, such as exploratory data analysis in information visualization or musical improvisation, where users hold a less specific mental model of what they want to achieve without knowing exactly what the result will look like (nonspecific goals). This poses the harder question of how to build an adaptive system for users who hold nonspecific goals. Brain sensing provides unique insight into users’ cognitive state that can be used to drive such a system. In this talk, we first present previous work on a BCI for users with specific goals, which uses the P300 response to select physical objects (MS research at UCL). We then present a model in which fNIRS brain sensing is used to develop an adaptive interface for users with nonspecific goals (current and planned PhD research at Tufts). Finally, we present and discuss applications of this model and planned future work based on current findings.