Personalized Face and Gesture Analysis for Innovations in Education and Rehabilitation
Abstract
AI opens a new world of possibilities for computer systems to observe people and interpret their movements. In this talk, I discuss my team's research in AI, in particular computer vision and human-computer interfaces, that facilitates advances in education and rehabilitation. One of our projects is to predict the learning outcomes of students from a video stream capturing the students' faces as they work with a virtual tutoring system on a set of math problems. By analyzing subtle differences in facial behavior with neural networks, we can predict the success or failure of a student's attempt to answer a question when the student has only just begun to work on the problem. Equipping a tutoring system with the ability to interpret affective signals from students has the potential to improve their learning experience by providing timely interventions, including appropriate affective reactions via the virtual tutor. I will also discuss related projects -- how facial expressivity can be analyzed with context-sensitive classifiers, how gestures can be recognized early, when they are only partially observed by the computer vision system, and how depth cameras can be used to support physical exercise at home.