Distinguished Lecture: Shaping the Future of Human-Centered Technology and Human-Machine Interaction

April 29, 2009
9:15a Refreshments, Burden Lounge; 9:30-10:30a Lecture
Nelson Auditorium, Anderson Hall

Abstract

The newest challenge facing human-centered technology is how human users will interact with highly complex systems so as to maximize both user and machine performance. Interaction with intelligent, complex machines must exploit all available perceptual channels and adapt to the changing dynamics of the user and the environment. Effective and efficient interaction requires access to, and insight into, the user's activities, emotions, and intentions. We have been developing socially assistive robot systems and testing them with human subject cohorts from a variety of user populations, including stroke patients, children with autism spectrum disorder, and elderly people with Alzheimer's disease and other forms of dementia. We have found both vast differences in individual interaction patterns and styles and statistically significant similarities across populations that can be exploited for system design and for best practices in assistive care. The ability to instantaneously and wirelessly track human state, movement, and activity without external sensors opens the door to a vast spectrum of human-machine interaction studies and applications. This talk will discuss multi-modal activity sensors, algorithms for efficiently processing their data, and applications in real-world scenarios, including algorithms that exploit physiological data (galvanic skin response, heart rate) together with external perceptual signals (vision, speech) to build a more comprehensive model of the user and to both respond to and anticipate the user's demands.
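To make the idea of combining physiological channels concrete, here is a minimal illustrative sketch, not the speaker's actual method: galvanic skin response and heart rate have different units, so each channel is z-score normalized before a weighted average produces a single arousal-like estimate. All function names, weights, and readings are hypothetical.

```python
# Illustrative sketch (hypothetical names and constants): fusing
# galvanic skin response (GSR) and heart rate into one arousal estimate
# via per-channel z-score normalization and a weighted average.
from statistics import mean, pstdev

def zscores(samples):
    """Standardize a channel so signals with different units are comparable."""
    m, s = mean(samples), pstdev(samples)
    return [(x - m) / s if s else 0.0 for x in samples]

def arousal_estimate(gsr, heart_rate, w_gsr=0.5):
    """Weighted fusion of two normalized physiological channels."""
    g, h = zscores(gsr), zscores(heart_rate)
    return [w_gsr * gi + (1 - w_gsr) * hi for gi, hi in zip(g, h)]

# Example: both channels rise together, so the fused estimate rises too.
gsr = [2.0, 2.1, 2.5, 3.0]       # microsiemens (hypothetical readings)
hr = [70.0, 72.0, 78.0, 85.0]    # beats per minute (hypothetical readings)
est = arousal_estimate(gsr, hr)
print(est[-1] > est[0])  # True: final arousal exceeds initial
```

A real system would add filtering, artifact rejection, and per-user calibration; the point here is only that normalization lets heterogeneous sensor streams be combined into one user-state signal.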
The talk will also address adaptive models of the user that adjust to changing interaction dynamics caused by fatigue, frustration, stress, and other key factors in human performance. Such models enable a robot system to adapt its behavior, based on multi-modal user activity tracking, so as to maximize the user's task performance over long periods of time (weeks and months rather than one or a few sessions). This research into multi-modal perception and interaction, and into long-term user modeling and adaptation, will benefit both human-computer and human-robot interaction. It will help overcome individual differences in background, education, and training, as well as cultural and linguistic barriers to effective system use, leading toward more accessible and useful human-centered assistive technologies.