As we move from homogeneous desktop systems to a wider variety of computing devices that vary in their connectivity, processing power, context of use, and input/output modalities, a key problem is to provide users with a wide range of different access mechanisms and interaction styles to the same underlying data and functionality. We seek to address the software structures and underlying technology needed to provide interchangeable or adaptable user interface front ends that access a standard application back end, without custom-building each new interface variant from scratch.
Our approach is based on extending the existing notions of dialogue independence and user interface management systems (UIMS). Dialogue independence refers to the separation of user interface-related code from the rest of the application code. With this approach, the code for the "semantic level" of the system is separated from the code for the "syntactic" and "lexical" levels through a well-defined software interface. It therefore supports the development of alternative user interfaces for the same application (semantics). Traditionally, this facilitates iterative refinement of the interface through prototyping and testing. More important for our purposes, it will now provide a basis for automatically providing alternative interfaces for different devices and platforms without modifying the semantic modules.
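Dialogue independence can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration (not the project's actual code, and in Python rather than the Java/XML technologies discussed here): a semantic module with no interface code, and two interchangeable front ends that drive it through the same well-defined interface.

```python
# Hypothetical example of dialogue independence: the semantic level is
# a temperature converter; two front ends render it differently.

class ConverterSemantics:
    """Semantic level: application functionality, no UI-related code."""
    def to_fahrenheit(self, celsius: float) -> float:
        return celsius * 9.0 / 5.0 + 32.0

class VerboseFrontEnd:
    """One syntactic/lexical front end over the semantic module."""
    def __init__(self, semantics: ConverterSemantics):
        self.semantics = semantics
    def render(self, celsius: float) -> str:
        return f"{celsius} C = {self.semantics.to_fahrenheit(celsius)} F"

class TerseFrontEnd:
    """An alternative front end; interchangeable with the one above."""
    def __init__(self, semantics: ConverterSemantics):
        self.semantics = semantics
    def render(self, celsius: float) -> str:
        return f"{self.semantics.to_fahrenheit(celsius):.0f}F"

semantics = ConverterSemantics()
print(VerboseFrontEnd(semantics).render(100.0))  # 100.0 C = 212.0 F
print(TerseFrontEnd(semantics).render(100.0))    # 212F
```

Because neither front end touches the converter's internals, either can be replaced (or generated automatically for a new device) without modifying the semantic module.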
For this project, our goal is to provide a development environment which generates alternative user interfaces for different situations and contexts of use from the same high level description. In order to accommodate user interaction in a variety of computing environments we intend to support the generation of alternative WIMP interfaces as well as tangible user interfaces and voice interfaces. Our ultimate goal is to provide a retargetable user interface syntax module that dynamically determines the user's current situation based on factors such as available input/output devices, connectivity, and the proximity of the user to the interaction devices, and provides an appropriately tailored user interface on the fly.
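The runtime selection step can be sketched as a simple decision over the detected context. The context attributes and interface-style names below are purely illustrative assumptions, not part of the project's design:

```python
# Hedged sketch: picking an interaction style from the user's current
# context. Attribute names ("display", "tangible_devices", ...) and the
# returned style labels are invented for illustration.

def select_interface(context: dict) -> str:
    """Choose an interface style from available I/O and connectivity."""
    if context.get("display") and context.get("pointer"):
        return "wimp"
    if context.get("tangible_devices"):
        return "tangible"
    if context.get("microphone") and context.get("speaker"):
        return "voice"
    return "text"  # fallback for minimal devices

print(select_interface({"display": True, "pointer": True}))    # wimp
print(select_interface({"microphone": True, "speaker": True})) # voice
```

In the envisioned system this decision would be re-evaluated as the context changes, so the tailored interface can be regenerated on the fly.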
To accomplish this, we will need a language for describing user interfaces at a very high level. Rather than specifying interface details such as display size and input device, such a language would capture higher-level properties of the interface dialogue, and thereby specify a schema or family of interfaces, which can be realized in a variety of specific forms and interaction styles depending on the user's current context. Initial plans call for the language itself to be based on XML, as will the specific user interface instances that it generates on demand. As an example, our previous work on a user interface specification language for "non-WIMP" interfaces used an SGML-based approach as an intermediate language with good results. In addition we will need a set of lexical modules to support the rendering and implementation of the alternative UIs.
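To give a flavor of such a description, the fragment below is a hypothetical XML sketch; the element and attribute names are invented for illustration and do not reflect the actual language design. The point is that it describes the dialogue abstractly, leaving device-specific realization to the lexical modules:

```xml
<!-- Hypothetical fragment of a high-level interface description.
     It specifies dialogue structure, not device-specific layout. -->
<interface task="adjust-thermostat">
  <input name="target-temperature" type="bounded-number" min="10" max="30"/>
  <output name="current-temperature" type="number"/>
  <!-- A renderer could map the bounded-number input to a slider (WIMP),
       a physical knob (tangible), or a spoken prompt (voice). -->
</interface>
```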
We began this work by identifying a set of constructs sufficient to describe a large variety of tangible user interfaces and by building a prototype TUIMS (Tangible User Interface Management System) which allows developers to specify a tangible user interface using a high level description language and simulate the tangible interaction in a Java 3D-based virtual reality environment. We are currently developing a lexical module which will enable the prototyping of the described tangible user interface using a Handy Board microcontroller. In addition, we are developing several alternative user interfaces for a single example application. First, we will use them to demonstrate the principle of dialogue independence and to show how the different interfaces can be interchanged at will. Then, we will use our experience with them to work toward a more general mechanism for generating alternative UIs, without the need to specify the different user interfaces explicitly, but rather by producing a single, more abstract, high level specification of a family of interfaces.