Designing User-, Hand-, and Handpart-Aware Tabletop Interactions
The arrival of interactive multi-touch tabletop systems heralded the development of many novel and powerful ways for people to manipulate digital content. Yet most technologies cannot sense what is causing the touch, i.e., they are unable to differentiate between the touches of two different people, or between different hands, or between different parts of the hand touching the surface. This is a lost opportunity, as this knowledge could be the basis of even more powerful interaction techniques.
Still, a few technologies do allow such differentiation, albeit in limited ways. The DiamondTouch surface identifies which person is touching, but little else. Muscle-sensing identifies fingers, and computer-vision approaches begin to infer similar information, though not robustly. The Fiduciary-Tagged Glove – which we use in our own work – is perhaps the most promising. Although it is a wearable, it offers an inexpensive, simple, yet robust tracking method for experimental development. It serves as a good stand-in until non-encumbered technologies are realistically available. Fiduciary tags (printed labels) are glued to key handparts of the glove. When a person wearing the glove works over a fiduciary tag-aware surface (e.g., the Microsoft Surface), the location and orientation of these tags can be tracked. Because the software associates tags with particular handparts, it can return precise information about one or more parts of the hand in contact with the touch surface.
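The mapping just described can be sketched as follows. This is a minimal illustration, not the glove's actual software: the tag IDs, names, and data layout are all assumptions made for the example. The key idea is that each tag ID is unique to one glove, so a single detection simultaneously identifies the person, the hand, and the handpart.

```python
from dataclasses import dataclass

# Hypothetical registry: each tag ID is assigned to one handpart of one
# person's glove, so a detected tag identifies person, hand, and handpart
# at once. IDs and names are illustrative only.
TAG_REGISTRY = {
    # tag_id: (person, hand, handpart)
    1: ("Ann", "left", "index-finger"),
    2: ("Ann", "left", "thumb"),
    3: ("Ann", "left", "palm"),
    7: ("Bob", "right", "index-finger"),
}

@dataclass
class HandpartContact:
    person: str
    hand: str
    handpart: str
    x: float
    y: float
    orientation: float  # tag orientation in degrees, as reported by the surface

def identify_contacts(raw_tags):
    """Map raw (tag_id, x, y, orientation) detections to handpart contacts,
    ignoring any tags not registered to a glove."""
    contacts = []
    for tag_id, x, y, orientation in raw_tags:
        if tag_id in TAG_REGISTRY:
            person, hand, part = TAG_REGISTRY[tag_id]
            contacts.append(HandpartContact(person, hand, part, x, y, orientation))
    return contacts
```

For example, a detection of tag 1 at some surface coordinate would be resolved to Ann's left index finger, together with its position and orientation.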
The problem is that the fiduciary-tagged glove still requires low-level programming. The programmer has to track all individual tags, calculate the spatial, motion, and orientation relations between tags, and infer posture and gesture information from those relationships. While certainly possible, this complexity limits the number of programmers willing to use it, and increases the development time needed to prototype rich interaction techniques. Our goal is to mitigate this problem by developing the TouchID (Touch IDentification) toolkit, which provides the programmer with knowledge of the person, hand, and handpart touching the surface. Our expectation is that by making this information easily available, the programmer can rapidly develop interaction techniques that leverage it.
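The contrast between low-level tag tracking and the kind of interface the toolkit aims to provide can be sketched as follows. The class and method names are hypothetical, not TouchID's actual API: the point is only that the programmer subscribes to touch events that already carry person, hand, and handpart identity, rather than computing those from raw tags.

```python
# Hypothetical handpart-aware event interface, sketched to illustrate the
# toolkit's goal; all names are assumptions, not TouchID's real API.
class HandpartAwareSurface:
    def __init__(self):
        self._handlers = []

    def on_touch(self, handler):
        """Register a callback receiving (person, hand, handpart, x, y)."""
        self._handlers.append(handler)

    def _dispatch(self, person, hand, handpart, x, y):
        # In a real toolkit, events would be driven by the tag tracker;
        # here we invoke handlers directly for illustration.
        for handler in self._handlers:
            handler(person, hand, handpart, x, y)

surface = HandpartAwareSurface()
log = []
surface.on_touch(lambda person, hand, part, x, y:
                 log.append(f"{person}'s {hand} {part} at ({x}, {y})"))

# Simulate one identified touch arriving from the tracker.
surface._dispatch("Ann", "left", "index-finger", 120, 80)
```

With an interface of this shape, an interaction technique such as "palm touch clears the canvas, index finger draws" reduces to a simple conditional inside the callback, rather than geometry over raw tag positions.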