Interacting Beyond the Boundaries of Large Displays
Many information spaces (e.g., detailed maps) exceed the limits of even wall-sized displays. The display thus acts as a viewport into these spaces, and information residing off-screen can be brought on-screen (i.e., inside the viewport) through navigation. Users commonly do this by moving the information space (or rather its digital representation) via mouse and keyboard, personal devices (e.g., tablets or phones), touch input, or mid-air interaction when interacting from afar with large displays. Although users may be able to imagine the information extending into off-screen space, the physical dimensions of a display typically determine the available input space: touch is performed directly on-screen, and mid-air pointing commonly uses ray casting onto the display's surface. More traditional input methods (e.g., a mouse pointer) are also bound to the screen, as users have to keep track of their cursors. As a consequence, the input space is usually tied to the display containing the visual output and is thus much smaller than the information space that users actually interact with.
Off-Limits allows the use of mid-air interaction within an input space that extends beyond the display's boundaries, thereby enabling a much larger input space than previous techniques. In doing so, we match the input space to a much larger part of the information space instead of confining it to the space within the display's boundaries. With Off-Limits, people can exploit their knowledge of spatial relations in the presented information: locations on a map, for example, have certain distances and orientations relative to each other. Despite being out of view, points of interest can thus be addressed directly in off-screen space, using the parts of the information space that are visible on the display as a reference.
The main benefit of Off-Limits is that it frees the input space from the physical limitations of a display. It extends two common operations: (1) it allows for addressing a point of interest (of which users know the spatial location) directly in off-screen space, without using repeated on-screen dragging operations (i.e., clutching); and (2) it allows for starting and/or continuing dragging operations in off-screen space (i.e., beyond the display's border), without interrupting interaction when the display's borders are reached. Further, Off-Limits can be implemented to allow for bi-manual operation (similar to bi-manual Multipoint, yet in off-screen space), where users may address two off-screen areas simultaneously (e.g., to perform on-screen comparison).
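The core geometric idea can be illustrated with a short sketch: a pointing ray is intersected with the display plane as in conventional ray casting, but the intersection is accepted even when it falls outside the physical display bounds. The coordinate system, function names, and the `Display` type below are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Display:
    """Physical display extent in meters, centered at the origin."""
    width_m: float
    height_m: float

def intersect_display_plane(origin, direction):
    """Intersect a pointing ray with the (infinitely extended) display
    plane z = 0.

    `origin` and `direction` are (x, y, z) tuples in display-centered
    coordinates, with the user standing at positive z. Returns the (x, y)
    hit point, which, unlike conventional ray casting, may lie outside
    the physical display bounds; returns None if the ray does not reach
    the plane.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz >= 0:
        return None  # ray points away from (or parallel to) the plane
    t = -oz / dz
    return (ox + t * dx, oy + t * dy)

def is_off_screen(point, display):
    """True if the hit point lies outside the physical display."""
    x, y = point
    return abs(x) > display.width_m / 2 or abs(y) > display.height_m / 2
```

For example, a user two meters in front of a 4 m wide display who points well past its right edge still produces a valid hit point; the application can then map that off-screen coordinate to the corresponding location in the information space.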
In this paper, we contribute three experiments that develop and evaluate this concept on large displays. The first experiment demonstrates that off-screen space is suitable (and complementary) for interacting with large displays. In the second study, we assess users' accuracy in pointing to locations in off-screen space, leading to a model that estimates the perceived location of an off-screen point based on the point's distance from the display's center. With this model, we refine the naïve adaptation of Off-Limits. In the third experiment, we demonstrate that the refined Off-Limits outperforms the naïve implementation regarding interaction time, number of interactions, and user satisfaction. These improvements make Off-Limits a compelling candidate for future large-display interaction.
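The source does not specify the functional form of the perceived-location model; as one plausible sketch, a radial correction could rescale the raw pointing position as a function of its distance from the display's center. The `gain` and `offset` parameters below are hypothetical and would have to be fitted from pointing-accuracy data such as that collected in the second study.

```python
import math

def correct_off_screen_point(x, y, gain=1.0, offset=0.0):
    """Map a raw off-screen pointing position (display-centered
    coordinates) to an estimate of the intended target.

    Assumes perceived distance scales roughly linearly with distance
    from the display's center: corrected_r = gain * r + offset.
    Both parameters are hypothetical placeholders for fitted values.
    """
    r = math.hypot(x, y)
    if r == 0.0:
        return (x, y)  # at the center, nothing to correct
    scale = (gain * r + offset) / r
    return (x * scale, y * scale)
```

With `gain > 1`, the correction pushes points outward, compensating for users who undershoot distant off-screen targets; the opposite parameterization would compensate for overshooting.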