When Performing Is Not Enough: Informativeness Incentives in Coevolutionary Algorithms
Coevolutionary algorithms, as population-based search heuristics, have traditionally seen application in domains for which no single metric of solution quality is known or tractable to employ. In the absence of such a yardstick, coevolutionary algorithms rely on aggregating the outcomes of interactions among many evolving entities to determine the fate of candidate solutions during search. Successful applications to interactive domains such as game strategy spaces or artificial life have motivated the study of when these algorithms succeed or fail.
Recent theoretical work towards that end emphasizes the role of testing, both broadening the scope of application and deepening understanding of how coevolutionary algorithms operate. Much of this work springs from a simple observation. When A and B play a game, for instance, the outcome gives some information about A's abilities that can then be used to determine whether A is kept or discarded. However, the interaction also reveals information about B's capacity to expose such abilities. Taking this testing role seriously suggests incentivizing B not just to perform well at the game, but also to inform well about how others perform.
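One simple way to operationalize such an informativeness incentive is to reward a test for the number of pairs of candidates it distinguishes, i.e., pairs that receive different outcomes against it. The sketch below is illustrative only, with assumed names (`performance`, `informativeness`, the toy `outcomes` matrix); it is not the specific measure used in the work surveyed in this talk.

```python
def performance(candidate, outcomes):
    """A candidate's score: its total wins across all tests it faced."""
    return sum(outcomes[candidate].values())

def informativeness(test, outcomes):
    """A test's score: the number of candidate pairs it distinguishes,
    i.e., pairs of candidates that receive different outcomes against it."""
    results = [outcomes[c][test] for c in outcomes]
    distinctions = 0
    for i in range(len(results)):
        for j in range(i + 1, len(results)):
            if results[i] != results[j]:
                distinctions += 1
    return distinctions

# Toy interaction matrix: outcomes[candidate][test] = 1 (win) or 0 (loss).
outcomes = {
    "A1": {"B1": 1, "B2": 0},
    "A2": {"B1": 1, "B2": 1},
    "A3": {"B1": 1, "B2": 0},
}

# B1 loses to every candidate, so it distinguishes no pairs (score 0);
# B2 separates A2 from both A1 and A3 (score 2), even though B2 wins
# fewer games -- performing well and informing well can come apart.
print(informativeness("B1", outcomes))  # -> 0
print(informativeness("B2", outcomes))  # -> 2
```

Under this toy measure, selection pressure on tests favors those that reveal structure among the candidates rather than those that merely win, which is the distinction the abstract draws between performing and informing.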
In this talk I will briefly survey evolutionary algorithms, and coevolutionary algorithms in particular, including key applications that have motivated their study. I will then present results from a growing body of work suggesting that adding an informativeness incentive to coevolutionary algorithms not only improves the quality of solutions found, but also uncovers some of the structure of the application domain.