Computer Science 250-AFF
Daniel McDuff talk


Location

Halligan 209, not our usual room in Eaton

Overview

Emotions play a huge role in our everyday lives. They influence memory, decision-making and well-being. It is now possible to quantify emotional responses on a large scale using webcams and wearable devices in everyday environments. I will present state-of-the-art work on unobtrusive measurement of facial expressions and physiology, along with insights from analysis of the world's largest dataset of naturalistic emotional responses.

Biography

Daniel McDuff builds and applies scalable computer vision and machine learning tools to enable the automated recognition and analysis of emotions and physiology. He is currently Director of Research at Affectiva and a post-doctoral research affiliate at the MIT Media Lab. At Affectiva, Daniel is building state-of-the-art facial expression recognition software and leading analysis of the world's largest database of human emotional responses. Daniel completed his PhD in the Affective Computing Group at the MIT Media Lab in 2014 and holds a bachelor's degree and a master's degree from Cambridge University. His work has received nominations and awards from Popular Science magazine (as one of the top inventions of 2011), South-by-Southwest Interactive (SXSWi), The Webby Awards, ESOMAR, the Center for Integration of Medicine and Innovative Technology (CIMIT) and several IEEE conferences. His work has been covered in many publications, including The Times, The New York Times, The Wall Street Journal, BBC News, New Scientist and Forbes magazine. Daniel was named a 2015 WIRED Innovation Fellow. Two of his papers were recently recognized among the most influential articles to appear in the IEEE Transactions on Affective Computing.

Info

Director of Research, Affectiva
Research Affiliate, MIT Media Lab
Web: http://alumni.media.mit.edu/~djmcduff/
Email: djmcduff@media.mit.edu