Highly accurate gaze estimation using a consumer RGB-depth sensor
Determining the direction in which a person is looking is an important problem in a wide range of HCI applications. In this paper we describe a highly accurate algorithm that performs gaze estimation using an affordable and widely available device such as the Kinect. The method we propose starts by performing accurate head pose estimation, achieved by fitting a person-specific morphable model of the face to depth data. The ordinarily competing requirements of high accuracy and high speed are met concurrently by formulating the fitting objective function as a combination of terms which excel at either accurate or fast fitting, and by adaptively adjusting their relative contributions throughout the fitting process. Following pose estimation, pose normalization is performed by re-rendering the fitted model as a frontal face. Finally, gaze estimates are obtained through regression from the appearance of the eyes in the synthetic, normalized images. Using EYEDIAP, the standard public dataset for the evaluation of gaze estimation algorithms on RGB-D data, we demonstrate that our method greatly outperforms the state of the art.
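The adaptive weighting idea in the abstract can be sketched in code. The following is a minimal toy illustration only, assuming a linear blending schedule and two stand-in energy terms on a synthetic point-fitting problem; the paper's actual terms fit a person-specific morphable face model to depth data, and its schedule is not specified here.

import numpy as np
from scipy.optimize import minimize

# Toy illustration of adaptively blending a "fast" and an "accurate"
# fitting term. The problem, term definitions, and linear schedule are
# all assumptions for illustration, not the authors' formulation.

rng = np.random.default_rng(0)
target = rng.normal(size=2)                        # toy "pose" to recover
data = target + 0.05 * rng.normal(size=(100, 2))   # noisy observations

def fast_term(p):
    # Cheap, coarse term: squared distance to the data centroid.
    return np.sum((p - data.mean(axis=0)) ** 2)

def accurate_term(p):
    # Slower, precise term: mean squared distance to every observation.
    return np.mean(np.sum((p - data) ** 2, axis=1))

p = np.zeros(2)
n_iters = 10
for t in range(n_iters):
    # Favour the fast term early for coarse alignment, shifting weight
    # to the accurate term for later refinement (assumed linear schedule).
    alpha = 1.0 - t / (n_iters - 1)
    energy = lambda q: alpha * fast_term(q) + (1.0 - alpha) * accurate_term(q)
    p = minimize(energy, p).x

print("recovered:", p, "target:", target)

The design point this sketch captures is that the two terms need not be balanced statically: early iterations can be dominated by whichever term converges quickly, with the weight migrating toward the more accurate term as the fit tightens.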
Ghiass, R. & Arandelovic, O. 2016, 'Highly accurate gaze estimation using a consumer RGB-depth sensor', in S. Kambhampati (ed.), Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York City, USA, 9–15 July 2016, AAAI Press, Palo Alto, pp. 3368–3374.
© 2016, IJCAI Organization/ijcai.org. This work is made available online in accordance with the publisher's policies. This is the author-created accepted manuscript following peer review and may differ slightly from the final published version.
Items in the St Andrews Research Repository are protected by copyright, with all rights reserved, unless otherwise indicated.