Graduate Research Talk: Learning Reasons for Success or Failure of Robot Actions
Humans naturally understand the reasons behind events: a glass breaks because we dropped it; a child laughs because we told a story. If a robot could infer the reasons behind actions, it would be better equipped to interact with and learn from the world, and better able to explain its own actions to humans. This problem is complex, requiring a combination of low-level and high-level information processing. We place reason learning in context, sketch a high-level solution, and demonstrate our progress on its subproblems of blame assignment, anomaly detection, and exploration.