ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-2, 113–120, 2016
https://doi.org/10.5194/isprs-annals-III-2-113-2016
© Author(s) 2016. This work is distributed under
the Creative Commons Attribution 3.0 License.

02 Jun 2016

GAZE AND FEET AS ADDITIONAL INPUT MODALITIES FOR INTERACTING WITH GEOSPATIAL INTERFACES

A. Çöltekin1, J. Hempel2, A. Brychtova1, I. Giannopoulos3, S. Stellmach4, and R. Dachselt5
  • 1Department of Geography, University of Zurich, Switzerland
  • 2HMI Specification, IAV GmbH, Berlin, Germany
  • 3Institute of Cartography and Geoinformation, ETH Zurich, Switzerland
  • 4Microsoft, Seattle, USA
  • 5Interactive Media Lab, Technische Universität Dresden, Germany

Keywords: Interfaces, User Interfaces, Multimodal Input, Foot Interaction, Gaze Interaction, GIS, Usability

Abstract. Geographic Information Systems (GIS) are complex software environments, and working with them typically involves multiple tasks and multiple displays. However, in most workplace settings, user input is still limited to mouse and keyboard. In this project, we demonstrate how gaze and feet, used as additional input modalities, can overcome the time-consuming and annoying mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning map visualizations. We first collected appropriate gestures in a preliminary user study with a small group of experts and, based on their input, designed two interaction concepts. After implementing both, we evaluated the two concepts comparatively in a second user study to identify their respective strengths and shortcomings. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.
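
The abstract does not include an implementation, but the core idea it summarizes (the gaze point implicitly anchors a zoom while a continuous foot pedal drives its rate) can be illustrated with a minimal sketch. The Python below is hypothetical: the `Viewport` type, the `pedal` deflection range of [-1, 1], and the `rate` constant are assumptions made for illustration, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    """Map viewport: top-left corner in world units, plus width and height."""
    x: float
    y: float
    width: float
    height: float

def zoom_toward_gaze(view: Viewport,
                     gaze_px: tuple[float, float],
                     screen_px: tuple[float, float],
                     pedal: float,
                     rate: float = 0.9) -> Viewport:
    """Scale the viewport around the currently gazed-at world point.

    `pedal` is a continuous foot-pedal deflection in [-1, 1] (assumed:
    forward tilt zooms in, backward tilt zooms out); `rate` sets how
    strongly a full deflection scales the view per update.
    """
    # Screen-space gaze position as fractions of the display size.
    fx, fy = gaze_px[0] / screen_px[0], gaze_px[1] / screen_px[1]
    # World coordinates of the point the user is looking at.
    gx, gy = view.x + view.width * fx, view.y + view.height * fy
    # pedal > 0 shrinks the viewport (zoom in); pedal < 0 grows it.
    scale = rate ** pedal
    new_w, new_h = view.width * scale, view.height * scale
    # Reposition so the gazed-at world point stays under the same pixel.
    return Viewport(gx - new_w * fx, gy - new_h * fy, new_w, new_h)

# Example: gaze in the upper-right quadrant, pedal tilted fully forward.
v = Viewport(0.0, 0.0, 1000.0, 800.0)
v = zoom_toward_gaze(v, gaze_px=(1440, 300), screen_px=(1920, 1080), pedal=1.0)
print(v)  # viewport shrunk by 10%, still anchored at the gaze point
```

Panning could be treated analogously, for instance by mapping lateral pedal deflection to a viewport translation toward the gazed-at screen edge; the point of the sketch is only that gaze supplies the "where" while the feet supply the continuous "how much".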