DynamicRead: exploring robust gaze interaction methods for reading on handheld mobile devices under dynamic conditions
Enabling real-time gaze interaction on handheld mobile devices has attracted significant attention in recent years. A growing number of research projects have focused on sophisticated appearance-based deep learning models to improve the precision of gaze estimation on smartphones. This raises important research questions, including how gaze can be used in a real-time application, and which gaze interaction methods are preferable under dynamic conditions in terms of both user acceptance and reliable performance. To address these questions, we design four gaze-based scrolling techniques: three explicit techniques based on Gaze Gesture, Dwell time, and Pursuit, and one implicit technique based on reading speed, to support touch-free page scrolling in a reading application. We conduct a 20-participant user study under both sitting and walking conditions. Our results reveal that the Gaze Gesture and Dwell time-based interfaces are more robust while walking, and that Gaze Gesture achieves consistently good usability scores without imposing a high cognitive workload.
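To make the dwell-time idea concrete, the following is a minimal sketch of how a dwell-triggered scroll could be detected from a gaze stream. It is not the paper's implementation; the zone boundary, threshold, and data layout are illustrative assumptions only.

```python
from dataclasses import dataclass

# Hypothetical parameters, chosen for illustration (not from the paper).
DWELL_THRESHOLD_S = 1.0   # how long gaze must rest in the zone to trigger a scroll
SCROLL_ZONE_TOP = 0.8     # normalized y; gaze at or below this line is the "scroll down" zone


@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    y: float  # normalized vertical gaze position (0 = screen top, 1 = bottom)


def dwell_scroll_events(samples):
    """Return the timestamps at which a dwell-triggered page scroll would fire."""
    events = []
    dwell_start = None
    for s in samples:
        if s.y >= SCROLL_ZONE_TOP:
            if dwell_start is None:
                dwell_start = s.t          # gaze entered the zone; start the timer
            elif s.t - dwell_start >= DWELL_THRESHOLD_S:
                events.append(s.t)         # dwell long enough: fire a scroll
                dwell_start = None         # reset so each scroll needs a fresh dwell
        else:
            dwell_start = None             # gaze left the zone; reset the timer
    return events
```

In a real application the same loop would run on live gaze estimates, and the Gaze Gesture and Pursuit variants would replace the dwell check with gesture or smooth-pursuit matching.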
Lei, Y., Wang, Y., Caslin, T., Wisowaty, A., Zhu, X., Khamis, M. & Ye, J. 2023, 'DynamicRead: exploring robust gaze interaction methods for reading on handheld mobile devices under dynamic conditions', Proceedings of the ACM on Human-Computer Interaction, vol. 7, no. ETRA, 158. https://doi.org/10.1145/3591127
Copyright © 2023 held by the owner/author(s). Publication rights licensed to ACM. This work has been made available online in accordance with publisher policies or with permission. Permission for further reuse of this content should be sought from the publisher or the rights holder. This is the author-created accepted manuscript following peer review and may differ slightly from the final published version. The final published version of this work is available at https://doi.org/10.1145/3591127.
Funding: Lei, Y. and Wang, Y. acknowledge financial support from the University of St Andrews and China Scholarship Council Joint Scholarship.
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.