This project seeks to invent and implement a new model, abstraction, and language that provide a formal basis for describing and building next-generation user interfaces, which will involve parallel, continuous user-computer interactions. The state of practice in user-computer interfaces today is the familiar direct-manipulation, GUI (graphical user interface), or WIMP (window, icon, menu, pointer) style of interface. The next generation of emerging interaction styles has been called non-WIMP and is typified by parallel and continuous interactions; virtual reality or virtual environments are a prime example. This next generation is not well served by the current generation of event-based software, languages, methods, and tools. A new abstraction and language are needed for describing and implementing these interfaces from the point of view of the user and the dialogue, rather than from the exigencies of the implementation.
The project goal is to develop a model and abstraction that capture the formal structure of next-generation dialogues in the way that existing techniques have captured command-based, textual, and event-based dialogues. Most current user interface description languages and software systems are based on serial, discrete, event-based interaction. Most of today's examples of non-WIMP interfaces have, of necessity, been designed and implemented with event-based models better suited to previous interface styles. Because those models fail to capture continuous, parallel interaction explicitly, the interfaces have required considerable ad-hoc, low-level programming. While some of these approaches are very inventive, they have made such systems difficult to develop, reuse, and maintain.
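The distinction between discrete and continuous interaction can be made concrete with a small sketch. The code below is purely illustrative (the class and method names are hypothetical, not part of any system described here): discrete tokens such as "grab" and "release" change dialogue state, while an active continuous relation makes an object's position follow a tracker on every update, something a purely event-based description leaves implicit.

```python
# Illustrative sketch only: a non-WIMP dialogue combining a discrete,
# token-based part with a continuous relation. All names are hypothetical.

class Dialogue:
    """Discrete tokens toggle a continuous link between tracker and object."""

    def __init__(self):
        self.grabbed = False           # discrete state, changed by tokens
        self.object_pos = (0.0, 0.0)   # position of the manipulated object

    def token(self, name):
        # Discrete, event-based part of the dialogue.
        if name == "grab":
            self.grabbed = True
        elif name == "release":
            self.grabbed = False

    def update(self, tracker_pos):
        # Continuous part: while the relation is active, the object
        # continuously follows the tracker on every sample.
        if self.grabbed:
            self.object_pos = tracker_pos

d = Dialogue()
d.update((1.0, 1.0))   # relation inactive: object does not move
d.token("grab")        # discrete token activates the continuous link
d.update((2.0, 3.0))   # object now follows the tracker
d.token("release")     # discrete token severs the link
d.update((9.0, 9.0))   # further tracker motion is ignored
print(d.object_pos)    # -> (2.0, 3.0)
```

In an event-based system only the `token` calls would be modeled explicitly; the continuous relation inside `update` is exactly the part that current description languages push into ad-hoc, low-level code.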
This project is divided into three phases: The first phase will investigate the nature of non-WIMP interactions and develop a new language and model for describing them, with emphasis on the combination of continuous plus discrete (token-based), parallel user-computer interactions. A key component throughout this work is evaluation, to determine whether the work is satisfying its objectives. The second phase will therefore implement a testbed to measure usefulness and performance for a time-critical application such as virtual reality. The third phase will attempt to extend the model to incorporate and/or interface with other aspects of a non-WIMP system, such as simulation, physical modeling, tighter coupling of formerly semantic-level operations, and, possibly, the use of agents. If successful, this work would ultimately be used by people who design and build user interfaces in virtual reality and other non-WIMP interaction styles.
Thanks to my graduate students, and to Linda Sibert and James Templeman of NRL for helpful discussions.