Institute for Web Science and Technologies · Universität Koblenz-Landau

The eye tracking group of Institute WeST has two papers accepted at ACM CHI 2020, the premier international conference on Human-Computer Interaction


We are delighted to announce that our papers “Leveraging Error Correction in Voice-based Text Entry by Talk-and-Gaze” and “TAGSwipe: Touch Assisted Gaze Swipe for Text Entry” have been accepted at the ACM CHI 2020 Conference on Human Factors in Computing Systems. From April 25th to 30th, 2020, CHI will take place at the Hawaiʻi Convention Center on the island of Oʻahu, Hawaiʻi, USA. WeST will be represented at CHI by the PhD researchers Ramin Hedeshy, Korok Sengupta, and Raphael Menges, and by the postdoctoral researcher Dr. Chandan Kumar. The details of our papers are as follows.

Leveraging Error Correction in Voice-based Text Entry by Talk-and-Gaze
Sengupta, K., Bhattarai, S., Sarcar, S., MacKenzie, I. S., & Staab, S.

We present the design and evaluation of Talk-and-Gaze (TaG), a method for selecting and correcting errors with voice and gaze. TaG uses eye gaze to overcome the inability of voice-only systems to provide spatial information. The user’s point of gaze is used to select an erroneous word either by dwelling on the word for 800 ms (D-TaG) or by uttering a “select” voice command (V-TaG). A user study with 12 participants compared D-TaG, V-TaG, and a voice-only method for selecting and correcting words. Corrections were performed more than 20% faster with D-TaG compared to the V-TaG or voice-only methods. As well, D-TaG was observed to require 24% less selection effort than V-TaG and 11% less selection effort than voice-only error correction. D-TaG was well received in a subjective assessment, with 66% of users choosing it as their preferred method for error correction in voice-based text entry.
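To give a flavour of the dwell-based selection principle behind D-TaG, here is a minimal Python sketch, assuming timestamped gaze samples and a hypothetical word_at() lookup that maps a gaze position to the word underneath it. It illustrates the 800 ms dwell idea only and is not the implementation evaluated in the study.

```python
# Illustrative sketch: a word is selected once gaze has rested on it for 800 ms.
# The gaze-sample format and the word_at() lookup are assumptions for illustration.

DWELL_TIME_MS = 800

class DwellSelector:
    def __init__(self, word_at, dwell_ms=DWELL_TIME_MS):
        self.word_at = word_at        # maps (x, y) to the word under gaze, or None
        self.dwell_ms = dwell_ms
        self.current_word = None
        self.dwell_start = None
        self.selected = False

    def on_gaze_sample(self, x, y, timestamp_ms):
        """Feed one gaze sample; return a word once it has been dwelled on."""
        word = self.word_at(x, y)
        if word != self.current_word:
            # Gaze moved to a different word: restart the dwell timer.
            self.current_word = word
            self.dwell_start = timestamp_ms
            self.selected = False
            return None
        if (word is not None and not self.selected
                and timestamp_ms - self.dwell_start >= self.dwell_ms):
            self.selected = True      # fire once per dwell on this word
            return word
        return None
```

A caller would construct the selector with its own layout lookup (e.g. DwellSelector(word_at=my_layout_lookup)) and feed it gaze samples from the eye tracker.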

TAGSwipe: Touch Assisted Gaze Swipe for Text Entry
Kumar, C., Hedeshy, R., MacKenzie, I. S., & Staab, S.

The conventional dwell-based methods for text entry by gaze are typically slow and uncomfortable. A swipe-based method that maps gaze paths into words offers an alternative. However, it requires the user to explicitly indicate the beginning and ending of a word, which is typically achieved by tedious gaze-only selection. This paper introduces TAGSwipe, a bimodal text input method that combines the simplicity of touch with the speed of gaze swiping through a word. The result is an efficient and comfortable dwell-free text entry method. A lab study found TAGSwipe significantly outperformed conventional swipe-based and dwell-based methods in efficacy and user satisfaction. Additionally, a small-scale study confirmed that TAGSwipe is effective in combining touch and gaze for text entry, compared to a standard multimodal method using gaze and touch for selecting each letter.
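The bimodal principle behind TAGSwipe, a touch press marking the start of a word, gaze tracing its shape, and the touch release triggering word decoding, can be illustrated with the following Python sketch. The key coordinates, the tiny lexicon, and the nearest-path matcher are simplifying assumptions for illustration, not the decoder used in the paper.

```python
# Illustrative sketch of the touch+gaze swipe idea: touch down starts recording
# the gaze path, touch up ends the word and decodes the path into a word.
# Key positions, lexicon, and the crude shape matcher are assumptions.

import math

KEY_CENTERS = {"h": (5.5, 1.5), "e": (2.5, 0.5), "l": (8.5, 1.5), "o": (8.5, 0.5)}
LEXICON = ["hello", "hole", "hell"]

def ideal_path(word):
    return [KEY_CENTERS[c] for c in word if c in KEY_CENTERS]

def path_distance(gaze_path, word):
    """Crude shape distance: compare resampled gaze points to key centers."""
    target = ideal_path(word)
    if not target or not gaze_path:
        return float("inf")
    step = max(1, len(gaze_path) // len(target))
    sampled = gaze_path[::step][:len(target)]
    return sum(math.dist(p, q) for p, q in zip(sampled, target)) / len(target)

class TouchGazeSwipe:
    def __init__(self):
        self.recording = False
        self.path = []

    def on_touch_down(self):          # finger down: start of the word
        self.recording, self.path = True, []

    def on_gaze_sample(self, x, y):
        if self.recording:
            self.path.append((x, y))

    def on_touch_up(self):            # finger up: end of the word, decode it
        self.recording = False
        return min(LEXICON, key=lambda w: path_distance(self.path, w))
```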

Furthermore, our ACM TOCHI journal paper “Improving User Experience of Eye Tracking-Based Interaction” will also be presented at CHI 2020 in the journal track:

Improving User Experience of Eye Tracking-Based Interaction: Introspecting and Adapting Interfaces
Menges, R., Kumar, C., & Staab, S.

Eye tracking systems have greatly improved in recent years, making them a viable and affordable option as a digital communication channel, especially for people lacking fine motor skills. Using eye tracking as an input method is challenging due to accuracy and ambiguity issues, and therefore research in eye gaze interaction is mainly focused on better pointing and typing methods. However, these methods eventually need to be assimilated to enable users to control application interfaces. A common approach to employing eye tracking for controlling application interfaces is to emulate mouse and keyboard functionality. We argue that the emulation approach incurs unnecessary interaction and visual overhead for users, aggravating the entire experience of gaze-based computer access. We discuss how knowledge about the interface semantics can help reduce the interaction and visual overhead and improve the user experience. Thus, we propose the efficient introspection of interfaces to retrieve the interface semantics and adapt the interaction with eye gaze. We have developed a Web browser, GazeTheWeb, that introspects Web page interfaces and adapts both the browser interface and the interaction elements on Web pages for gaze input. In a summative lab study with 20 participants, GazeTheWeb allowed the participants to accomplish information search and browsing tasks significantly faster than an emulation approach. Additional feasibility tests of GazeTheWeb in lab and home environments showcase its effectiveness in accomplishing daily Web browsing activities and adapting a large variety of modern Web pages to support interaction for people with motor impairments.
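The introspection-and-adaptation idea can be illustrated with a small Python sketch: interactive elements and their semantics are retrieved from the interface, and a gaze fixation is routed to an element-specific, gaze-optimized action instead of an emulated mouse click. The element records and action names below are assumptions for illustration; GazeTheWeb itself introspects real Web page DOMs.

```python
# Illustrative sketch: introspect the interface for interactive elements and
# route a gaze fixation to a gaze-adapted action rather than a raw click.
# The element list and action names are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Element:
    kind: str                      # e.g. "link", "text_input", "video"
    box: tuple                     # (x, y, width, height) on screen

def introspect_page():
    """Stand-in for DOM introspection: return the page's interactive elements."""
    return [
        Element("link", (100, 200, 180, 24)),
        Element("text_input", (100, 300, 300, 32)),
    ]

def element_at(elements, x, y):
    for e in elements:
        ex, ey, w, h = e.box
        if ex <= x <= ex + w and ey <= y <= ey + h:
            return e
    return None

def handle_fixation(elements, x, y):
    """Map a gaze fixation to an element-specific action, not cursor emulation."""
    e = element_at(elements, x, y)
    if e is None:
        return "no action"
    if e.kind == "text_input":
        return "open gaze keyboard"   # instead of emulating click + OS keyboard
    if e.kind == "link":
        return "activate link"        # direct activation, no cursor emulation
    return "zoom to disambiguate"     # fallback for small or ambiguous targets

print(handle_fixation(introspect_page(), 110, 210))  # -> "activate link"
```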

The journal publication is already accessible at: https://doi.org/10.1145/3338844


17.12.19

Contact for inquiries: west@uni-koblenz.de