What's Wrong with Mobile Computer Systems Research?
February 10, 1997
Written for the CHI '97 workshop on ubiquitous computing, March 23-24, 1997, Atlanta, GA
From 1993-1995 I worked on Odyssey, a mobile computing systems project at CMU involving Brian Noble, myself, our advisor (Satya), and others. The high-level scenario was of a user on a mobile machine, accessing information while moving around, switching between different types of wireless networks, occasionally "docking" into power and high-bandwidth connectivity, etc.
The key problem we addressed was "adaptation" to the changing availability of resources like network bandwidth. Coda and other systems implement adaptation entirely in the operating system by using the local cache to serve requests, but we were interested in more flexible and powerful application-specific adaptation. At the same time, it seemed that some central arbiter of resource control was required for quotas, allocation, and (e.g.) network estimation. So adaptation should be implemented not by each application alone, but by a collaboration between applications and a resource manager. We described this approach as "application-aware adaptation".
The mechanism for applications to reduce resource use was reduction in fidelity. Reduced fidelity might mean reducing image quality or frame rate for video, reducing the resolution or eliminating features from a map, or relaxing consistency for a text file. But even different applications based on the same data type may require different trade-offs between the different kinds of fidelity: for example, a movie player would drop frames to maintain real-time playback, while a movie editor, which needs to access each frame, might prefer to sacrifice image quality.
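The division of labor described above — a central arbiter that estimates resources, and applications that each apply their own fidelity policy — can be sketched in a few lines. This is a minimal illustration, not Odyssey's actual API; all class and method names here are hypothetical.

```python
# Sketch of application-aware adaptation (hypothetical names, not Odyssey's API).
# A central ResourceManager estimates bandwidth; registered applications are
# notified of changes and each chooses its own fidelity trade-off.

class ResourceManager:
    """Central arbiter: tracks resource estimates and notifies applications."""
    def __init__(self):
        self._apps = []
        self.bandwidth_kbps = 1000  # current network estimate

    def register(self, app):
        self._apps.append(app)

    def update_bandwidth(self, kbps):
        self.bandwidth_kbps = kbps
        for app in self._apps:
            app.adapt(kbps)  # upcall: each app applies its own policy


class MoviePlayer:
    """Maintains real-time playback, so it drops frames first."""
    def __init__(self):
        self.frame_rate = 30

    def adapt(self, kbps):
        self.frame_rate = 30 if kbps >= 500 else 10


class MovieEditor:
    """Needs access to every frame, so it sacrifices image quality instead."""
    def __init__(self):
        self.quality = "high"

    def adapt(self, kbps):
        self.quality = "high" if kbps >= 500 else "low"


manager = ResourceManager()
player, editor = MoviePlayer(), MovieEditor()
manager.register(player)
manager.register(editor)

# Bandwidth drops: the same event produces different trade-offs per application.
manager.update_bandwidth(100)
print(player.frame_rate, editor.quality)  # → 10 low
```

The point of the sketch is that the manager knows nothing about frames or image quality; it only broadcasts resource changes, and each application translates them into its own notion of fidelity.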
The Desktop Paradigm
Application-aware adaptation is hardly required in a special-purpose device, so an assumption of Odyssey was that users wanted to use lots of applications and even multitask between them — in other words, that they wanted something near desktop functionality. Most research in mobile computing so far has been based on that model, which I call the desktop paradigm. After all, a laptop computer is a desktop system that has been squeezed into a 4-8 lb. box and that sometimes loses access to external sources of data and power.
Trying to squeeze desktop functionality out of a mobile wireless computer was and is a very interesting challenge, but from the standpoint of "why do people need this" the whole idea looks like a mistake. Laptops can't be used while moving around, so it's not clear why laptop users need a wireless connection. Plausible scenarios for using laptops with wireless networks do exist, but are much narrower than the general network usage Odyssey was trying to support.
The desktop paradigm is an integral part of the expertise of computer scientists (at least of systems types like me), and we use desktop computers every day, so it's tempting to keep pushing it into smaller devices that *are* used while moving around. But the desktop paradigm is very heavy-weight. System requirements include a bit-mapped display, text input, a pointing device, a disk, network access (at least via modem), a multi-application operating system, and a complex user interface. Users must sit down and concentrate to use these effectively.
The Tourist Theme
One scenario of use for Odyssey was tourism, a popular one in mobile computing research (see TNET, CyberGuide, ParcTab, CMU's Navigator). The general idea is that mobile users in a strange environment want a wireless laptop-style device feeding them a stream of multimedia information. Often the tourist needs directions to the nearest French restaurant, instructions on how to deal with an emergency, etc.
To support tourism I spent a lot of time porting GRASS, a public-domain geographical information system (GIS) designed for use by geographers, into the Odyssey framework. From both my own experience and comments in the literature, the use of GIS systems to support "tourism" has often been somewhat naïve: for example, maps tend to be represented in raster (bitmap) formats instead of the much more compact and more appropriate vector formats. I attribute this to a lack of expertise in GIS among computer scientists.
Computer scientists like the tourist scenario because it produces lots of interesting systems issues, but in hindsight I think the tourist scenario is not at all convincing. Tourists aren't changing any information about the area they move through, and though they may be curious, up-to-date information is hardly crucial, so they don't need a wireless connection. Tourists shouldn't be walking around looking at computer screens, so why give them one? In short, why do tourists need all this technology? Consider some focused variations on the tourist scenario (none of these are my ideas).
Tourists need directions. Tourists drive around a lot. So some rental cars now include an automatic driving guide using GPS and mapping software. They include displays, but use audio output during actual driving, and no wireless connection is required. Having your car tell you how to get to the nearest French restaurant is obviously more appropriate than having your laptop tell you, and it's easier to implement in a car to boot.
Museum visitors already use analog audio systems with headphones to provide guided tours. A digital headphone system with location sensors could serve a much wider area (digital audio storage is much more compact than analog) and would free the tourist from pre-determined tours. No display would be required, and the input requirements are minimal.
Mechanics are a better example of "tourists" who require ubiquitous computing (see CMU's VuMan). Airplane mechanics walk around very large objects and deal with very large collections of information (manuals) while doing expensive and safety-critical work. Mechanics need to keep their hands free, and they need some sort of display to view diagrams, so a wearable device is required. The sheer size of the repair manuals requires some sort of wireless connection. The importance of their work imposes all sorts of record-keeping requirements, so some sort of input is required as well.
In a similar vein, doctors in a hospital "repair" people by running around in a building, accessing large amounts of information, producing detailed records, etc. There are hints that this will be the first real-world success for wireless pen-based systems.
These adventures suggest that what ubiquitous computing research needs right now is experiments with narrow, focused applications. Research on broader applications runs the risk of solving the wrong problems, producing overweight systems, and being led astray by the desktop paradigm.
For example, the narrowed applications above are much more compelling than the broad tourist scenario, yet the desktop paradigm would not support any of them very well. The combination of a keyboard, a pointing device, and a bit-mapped display would be very inappropriate. And with the possible exception of the medical application, the networking and data storage requirements are very different from those of a personal computer. Even there, the Odyssey model of adaptively changing the fidelity of the data would not be appropriate.
Along similar lines, I speculate that much of the disappointment with PDAs over the last few years stems from the spectacular success of laptops based on the desktop paradigm and incompatibilities between that paradigm and good PDA applications.
However, computer scientists do not traditionally build systems with narrow uses and specialized hardware. Because these applications do not fit into the desktop paradigm, application-specific expertise from outside of computer science may be required. The software development platforms for these types of systems are very different from what computer scientists are accustomed to; operating systems like Windows CE may change that. And the effort of producing prototype hardware must be justified carefully. Simulating the expected hardware with off-the-shelf laptops or PDAs is the standard solution, but this solution will limit the realism of the research and may make it impossible to determine if the application warrants further study.
Information capture is a promising application area for ubiquitous computing, but computer scientists have paid little attention to it (see StormCast for an exception). Note that unlike information access, information capture is inherently mobile and is not part of the desktop paradigm.
Hand-held devices for information capture have actually been successful in the real world, integrated into specific vertical applications. Some insurance assessors now use pen tablets to annotate diagrams with the damage to an automobile and estimate the cost of repair. Inventory control systems based on bar-code scanners are common. Meter readers working for utilities now use hand-held computers. Most of you have "signed" for the receipt of a package on a digital signature reader. In the consumer electronics world, digital cameras with displays for previewing and view-finding are now available. And a particularly clunky example - I was interviewed a few months ago by a census worker who filled out questionnaires on an old, DOS-based laptop.
These are all rather narrow applications that have succeeded. What can we learn from them?