FingerGlass is a multitouch interaction technique. It enables users to navigate virtual scenes and translate objects therein.
Learn how to navigate through scenes, select objects and translate them using FingerGlass.
We present FingerGlass at CHI 2011, the conference of the ACM Special Interest Group on Computer-Human Interaction, on May 10 in Vancouver, BC.
We interact with our environment at many different scales. For example, creating a painting requires us to work on its global composition as well as its finest details. Similarly, software systems such as Adobe Illustrator or Google Maps operate on multiscale scenes.

Unlike the physical world, user interfaces of such systems are limited in size and resolution and therefore encompass only a small range of scales. To overcome this, they provide zoom controls that redefine the current viewport. However, when users zoom in, they lose context. In addition, repeatedly switching back and forth between navigation and interaction with the mouse is time-consuming.
We present FingerGlass, a technique that lets the user define a viewport using one hand while the other hand simultaneously interacts with objects in the scene. At all times, the contents of this viewport are shown twice on the screen: once in a global zoomed-out view stretched across the entire screen, retaining contextual information, and once as a magnified copy on top of the zoomed-out view. We call the latter the magnified view. Any interaction with objects in the scene takes place in the magnified view. This way, fingertips do not occlude the area of interest in the zoomed-out view.
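To make the two-view mapping concrete, here is a minimal Python sketch of how a touch in the magnified view could be translated back into zoomed-out view coordinates. The circle-through-two-touches geometry, the fixed magnification factor, and all names are illustrative assumptions, not the implementation described in the paper.

```python
import math

# Minimal sketch of the two-view mapping (assumed geometry, not the
# published implementation): the first hand defines a circular area of
# interest, which is shown enlarged in the magnified view.

MAGNIFICATION = 4.0  # assumed fixed zoom ratio for this sketch


def area_of_interest(touch_a, touch_b):
    """Circle spanned by the two touch points of the first hand."""
    center = ((touch_a[0] + touch_b[0]) / 2.0,
              (touch_a[1] + touch_b[1]) / 2.0)
    radius = math.dist(touch_a, touch_b) / 2.0
    return center, radius


def magnified_to_scene(touch, magnified_center, aoi_center):
    """Map a touch in the magnified view back to zoomed-out view coordinates.

    Offsets from the magnified view's center are divided by the zoom ratio
    and applied around the center of the area of interest.
    """
    dx = (touch[0] - magnified_center[0]) / MAGNIFICATION
    dy = (touch[1] - magnified_center[1]) / MAGNIFICATION
    return (aoi_center[0] + dx, aoi_center[1] + dy)
```

Under these assumptions, selecting an object in the magnified view reduces to picking the scene object closest to the point returned by magnified_to_scene.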
Left: Translation of a 2D shape in a vector graphics application.
Right: Translation of vertices in a multitouch 3D modeling system.
Multitouch workstations provide more degrees of freedom than the mouse and reduce the mental effort required for interacting with virtual objects. These advantages come at the cost of screen occlusion and reduced precision. Nonetheless, research has shown that touch-based devices can achieve faster task completion times and comparable error rates to a mouse, given sufficiently large targets.

The precise selection of small on-screen targets has been well-studied. However, with the recent advent of multitouch-based content creation applications such as Eden, we require tools for more complex interactions than just selection.
This is a walkthrough of FingerGlass in a trip planning application: the user would like to move the marked waypoint to a different street intersection in the same neighborhood. More details can be found in the original Master's thesis.
This is what we want to do.
Once the user defines the area of interest, the magnified view appears.
The user selects the waypoint in the magnified view and...
...translates it.
The destination of the waypoint might lie outside the current area of interest. In that case, the third step is extended as follows.
After releasing the first hand, the magnified view shrinks.
If the user taps at a location in the original viewport, FingerGlass translates the waypoint to this location.
FingerGlass applies any translation of the second hand to the waypoint on a smaller scale (see the sketch after this walkthrough).
If the user specifies a new area of interest, the magnified view grows again.
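The ratio-scaled drag used in the walkthrough above can be sketched as follows; the function name, arguments, and the example zoom ratio are assumptions made for illustration rather than the published implementation.

```python
def scaled_drag(drag_start, drag_current, object_start, zoom_ratio):
    """Apply a second-hand drag to the object at 1/zoom_ratio of its
    on-screen length, allowing finer positioning than a direct drag."""
    dx = (drag_current[0] - drag_start[0]) / zoom_ratio
    dy = (drag_current[1] - drag_start[1]) / zoom_ratio
    return (object_start[0] + dx, object_start[1] + dy)


# Example: with a zoom ratio of 4, a 40-unit finger movement to the right
# nudges the waypoint by only 10 units in the zoomed-out view.
new_position = scaled_drag((100, 100), (140, 100), (12.0, 30.0), 4.0)
# new_position == (22.0, 30.0)
```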
We evaluated our technique in the context of selection and translation by comparing it to three popular techniques. We asked ten users to perform a total of 480 tasks. Each task required the user to select a target with a radius of 2 millimeters and to translate it across various distances between 2.7 and 350 millimeters.
We found that FingerGlass significantly outperforms the other techniques: users acquired and translated targets more than 50% faster than with the second-best technique. Error rates did not differ significantly. For selection, and for translation over all distances larger than 10 millimeters, FingerGlass was significantly more efficient.
Eight out of ten participants labeled FingerGlass as their favorite technique. Participants considered it about as easy to learn as Shift by D. Vogel and P. Baudisch.
More information about our user study can be found in our CHI 2011 paper. Additional figures and measurements can be found in the original Master's thesis.
Top: Average time to select a target with a radius of 2 millimeters. Lower is better.
Bottom: Average time to translate a target across various distances. Lower is better.
Dominik Kaeser is a Technical Director Resident at Pixar Animation Studios. FingerGlass was the subject of his Master's thesis, carried out at the University of California, Berkeley and ETH Zurich in Switzerland.
Maneesh Agrawala is an Associate Professor in Electrical Engineering and Computer Science at the University of California, Berkeley. He works on visualization, computer graphics and human-computer interaction.
Mark Pauly is an Associate Professor at the School of Computer and Communication Sciences at EPFL. His research interests include computer graphics and animation, geometry processing, shape modeling and analysis, and computational geometry.
We thank Tony DeRose, Björn Hartmann, Christine Chen and Kenrick Kin for their insightful comments and continuous support.