Bayesian models of human learning and reasoning
Bayesian methods have revolutionized major areas of artificial intelligence, machine learning, natural language processing, and computer vision. Recently, Bayesian approaches have also begun to take hold in cognitive science, as a principled framework for explaining how humans might learn, reason, perceive, and communicate about their world. This talk will sketch some of the challenges and prospects for Bayesian models in cognitive science, and also draw some lessons for bringing probabilistic approaches to artificial intelligence closer to human-level abilities. The focus will be on learning and reasoning tasks where people routinely make successful generalizations from very sparse evidence, such as learning the meanings of words, reasoning about the hidden properties of objects, and inferring causal relationships. The models discussed will draw on -- and, hopefully, offer new insights for -- several directions in contemporary machine learning, such as semi-supervised learning, modeling relational data, structure learning in graphical models, and hierarchical Bayesian modeling.