Visual Gaze Estimation by Joint Head and Eye Information
Sebe, Niculae;
2010-01-01
Abstract
In this paper, we present an unconstrained visual gaze estimation system. The proposed method extracts the visual field of view of a person looking at a target scene in order to estimate the approximate location of interest (visual gaze). The novelty of the system is the joint use of head pose and eye location information to fine-tune the visual gaze estimated from the head pose alone, so that the system can be used in multiple scenarios. The improvements obtained by the proposed approach are validated on the Boston University head pose dataset, on which the standard deviation of the joint visual gaze estimate improved by 61.06% horizontally and 52.23% vertically with respect to the estimate obtained from head pose alone. A user study shows the potential of the proposed system. © 2010 IEEE.
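
The abstract describes a two-stage idea: a coarse point of regard derived from head pose, then a correction driven by the detected eye (pupil) location. The sketch below is only an illustration of that general scheme, not the authors' implementation; the geometry, the pixel-to-metre factor, the gain, and all function names are assumptions introduced here for clarity.

```python
# Minimal sketch (assumed geometry, not the paper's method): a head-pose-only
# gaze estimate on a fronto-parallel target plane, refined by an in-plane
# shift derived from the eye-centre offset.
import numpy as np

def gaze_from_head_pose(yaw_deg, pitch_deg, head_pos, screen_dist):
    """Intersect the head-direction ray with the target plane z = screen_dist."""
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    # Unit-length viewing direction from yaw (left/right) and pitch (up/down).
    direction = np.array([np.sin(yaw), -np.sin(pitch), np.cos(yaw) * np.cos(pitch)])
    t = screen_dist / direction[2]      # scale factor to reach the plane
    return head_pos + t * direction     # coarse 3-D point of regard

def refine_with_eyes(head_gaze, eye_offset_px, px_to_m=0.0005, gain=1.0):
    """Shift the coarse estimate by a displacement proportional to the pupil
    offset from the eye-socket centre (px_to_m and gain are placeholders)."""
    dx, dy = eye_offset_px
    correction = gain * px_to_m * np.array([dx, -dy, 0.0])
    return head_gaze + correction

if __name__ == "__main__":
    head_pos = np.array([0.0, 0.0, 0.0])                       # head at origin (metres)
    coarse = gaze_from_head_pose(10.0, -5.0, head_pos, 0.6)    # head pose only
    fine = refine_with_eyes(coarse, eye_offset_px=(12.0, -4.0))
    print("head-pose-only gaze:", coarse)
    print("eye-refined gaze:   ", fine)
```

Under these assumptions, the eye-location term only nudges the coarse estimate, which mirrors the paper's framing of the eyes as a fine-tuning signal on top of the head-pose estimate.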



