How to Build a Machine that Understands
I sketch a theory of what meaning and understanding are and a project based on this theory to create a program that understands.
I define understanding a domain to mean being able to retrieve or rapidly generate computer code that analyzes and solves most new problems as they arise within it. This contrasts with the abilities of most programs we write, which are crafted to deal with problems already seen. Understanding requires a powerful library of modules that have meaning in the sense that they exploit the underlying structure of the domain, together with methods to rapidly compose them into new solutions.
Evolution discovered meaningful modules. In fact, that is why it is so effective. As evolution discovers meaning, a high fraction of random mutations become meaningful: they have functional consequences, such as reshaping biological systems in functional ways. In this sense, evolution essentially understands.
Human thought builds on the discoveries of evolution. We assemble arrangements of meaningful modules, searching for the one that solves a new problem. Because we search only over meaningful possibilities, the search is short, and we can rapidly produce code to solve many new problems that arise in the world.
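As a toy illustration of why restricting search to meaningful modules makes it short, consider the following sketch (all module names and the domain are hypothetical, chosen only for illustration). A small library of "meaningful" primitives is searched exhaustively for a composition that fits input/output examples; because the library is tiny and every candidate is functional, even brute-force enumeration finds solutions quickly, whereas searching over raw code would face a vastly larger space:

```python
from itertools import product

# A toy library of "meaningful" modules (hypothetical): each one is a
# small function that exploits structure of the integer domain.
modules = {
    "double": lambda x: 2 * x,
    "increment": lambda x: x + 1,
    "square": lambda x: x * x,
}

def solves(pipeline, examples):
    """Check whether composing the named modules maps each input to its output."""
    for x, want in examples:
        for name in pipeline:
            x = modules[name](x)
        if x != want:
            return False
    return True

def search(examples, max_depth=3):
    """Enumerate compositions of meaningful modules, shortest first."""
    for depth in range(1, max_depth + 1):
        for pipeline in product(modules, repeat=depth):
            if solves(pipeline, examples):
                return pipeline
    return None

# Recover code for f(x) = (x + 1) ** 2 from a few examples.
print(search([(1, 4), (2, 9), (3, 16)]))  # -> ('increment', 'square')
```

The point of the sketch is the size of the search space: with three meaningful primitives and depth three, there are at most 3 + 9 + 27 = 39 candidates to test, so the search terminates almost immediately.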
Attempts at designing Artificial General Intelligence (AGI) have not come to grips with the nature of understanding. We do not have introspective access to the internals of the meaningful modules: we perceive only at the meaning level and have no access to the levels that support it. Designing the underlying modules is thus too hard a task for unaided humans. Automatically creating the whole understanding program, on the other hand, would require competing with evolution, which had vastly greater computational resources than we will ever have.
I propose Artificial Genie, a system that will provide an environment in which humans collaborate with automatic systems on the development of robust computer programs. Using it, I propose to build a program that understands interesting domains. Novel aspects of Artificial Genie include: support for mental imagery in a way that makes concrete how modules and agents exploit the underlying structure of the physical and mathematical world, and that naturally supports communication between agents; a planning system that uses the mental image to follow only causally relevant possibilities, reproducing introspection in the cases considered to date; scaffolds that greatly speed the construction of new, meaningful programs and that capture insights from evolution; economic frameworks that motivate efficient assembly of programs and scaffolds; and the natural incorporation of each of these in module creation and program assembly.
Bio: Eric B. Baum is currently developing software that collaborates with people in the construction of robust programs that understand problem domains. He is the author of "What is Thought?" (MIT Press, 2004), which proposed a fundamental theory of how the mind works, what understanding is, and why previous approaches to artificial intelligence have failed to generate systems that understand in a human-like fashion. He has spent the past five years extending these ideas, working out the practical methods for producing cognitive programs that form the basis of his current effort, Artificial Genie. Eric has held positions at the University of California at Berkeley, Caltech, MIT, Princeton, and the NEC Research Institute, publishing papers in computational learning theory, machine learning, machine reasoning, evolutionary programming, neural networks, theoretical physics, and DNA computing, including results such as the Hayek machine, "What size net gives valid generalization?", and the Baum-Hawking mechanism for the vanishing of the cosmological constant. He holds a BA and an MA from Harvard and a PhD in physics from Princeton. Eric also serves on the Board of Directors of Netrics.