Multimodal Across Domains Gaze Target Detection

Tonini, Francesco; Beyan, Cigdem; Ricci, Elisa
2022-01-01

Abstract

This paper addresses the gaze target detection problem in single images captured from the third-person perspective. We present a multimodal deep architecture to infer where a person in a scene is looking. This spatial model is trained on head images of the person of interest together with scene images and depth maps, which provide rich contextual information. Unlike several prior works, our model does not require supervision of gaze angles and does not rely on head orientation information or on the location of the eyes of the person of interest. Extensive experiments demonstrate the stronger performance of our method on multiple benchmark datasets. We also investigate several variations of our method obtained by altering the joint learning of the multimodal data; some of these variations outperform prior works as well. For the first time, this paper examines domain adaptation for gaze target detection, and we empower our multimodal network to effectively handle the domain gap across datasets. The code of the proposed method is available at https://github.com/francescotonini/multimodal-across-domains-gaze-target-detection.
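
As a rough illustration of the kind of multimodal architecture the abstract describes, the sketch below assumes a three-branch PyTorch network (one small encoder each for the head crop, the scene image, and the depth map) whose features are concatenated and decoded into a gaze-target heatmap. The class name MultimodalGazeNet, the layer sizes, and the extra head-position channel in the scene input are illustrative assumptions, not the authors' design; the official implementation is in the GitHub repository linked above.

# Hypothetical sketch (not the authors' implementation; see the linked GitHub
# repository for the official code). Illustrates, under assumed design choices,
# a three-branch multimodal model that fuses a head crop, the scene image, and
# a depth map to regress a gaze-target heatmap.
import torch
import torch.nn as nn


class MultimodalGazeNet(nn.Module):
    def __init__(self, feat_dim=64, heatmap_size=64):
        super().__init__()
        # One lightweight convolutional encoder per modality (assumed; the
        # paper may use deeper backbones).
        self.head_enc = self._encoder(3, feat_dim)    # head crop (RGB)
        self.scene_enc = self._encoder(4, feat_dim)   # scene RGB + head position mask
        self.depth_enc = self._encoder(1, feat_dim)   # depth map
        # Fuse the three feature maps and decode them into a spatial heatmap.
        self.decoder = nn.Sequential(
            nn.Conv2d(3 * feat_dim, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(size=(heatmap_size, heatmap_size), mode="bilinear",
                        align_corners=False),
            nn.Conv2d(feat_dim, 1, kernel_size=1),
        )

    @staticmethod
    def _encoder(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((16, 16)),
        )

    def forward(self, head, scene, depth):
        # Each branch yields a 16x16 feature map; channel-wise concatenation
        # is one simple joint-learning strategy for the modalities.
        fused = torch.cat(
            [self.head_enc(head), self.scene_enc(scene), self.depth_enc(depth)],
            dim=1,
        )
        return self.decoder(fused)  # (B, 1, heatmap_size, heatmap_size)


if __name__ == "__main__":
    model = MultimodalGazeNet()
    head = torch.randn(2, 3, 224, 224)    # cropped head image
    scene = torch.randn(2, 4, 224, 224)   # scene image + head-position channel
    depth = torch.randn(2, 1, 224, 224)   # monocular depth estimate
    print(model(head, scene, depth).shape)  # torch.Size([2, 1, 64, 64])

The torch.cat fusion is only one simple joint-learning choice; the variations mentioned in the abstract would correspond to different ways of combining these per-modality features.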
2022
The 24th ACM International Conference on Multimodal Interaction (ICMI 2022)
1601 Broadway, 10th Floor, NEW YORK, NY, UNITED STATES
Association for Computing Machinery
9781450393904
Tonini, Francesco; Beyan, Cigdem; Ricci, Elisa
Multimodal Across Domains Gaze Target Detection / Tonini, Francesco; Beyan, Cigdem; Ricci, Elisa. - (2022), pp. 420-431. (Paper presented at the 24th ACM International Conference on Multimodal Interaction, ICMI 2022, held in Bengaluru (Bangalore), India (hybrid), 7-11 November 2022) [10.1145/3536221.3556624].
Files in this product:

3556624.pdf
Access: Archive administrators only
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 6.23 MB
Format: Adobe PDF

3536221.3556624.pdf
Access: Open access
Type: Publisher's layout (editorial version)
License: All rights reserved
Size: 6.49 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/352682
Citations
  • PMC: not available
  • Scopus: 8
  • ISI (Web of Science): 5
  • OpenAlex: not available