Object-aware Gaze Target Detection / Tonini, Francesco; Dall'Asen, Nicola; Beyan, Cigdem; Ricci, Elisa. - (2023), pp. 21803-21812. (Paper presented at the 2023 IEEE/CVF International Conference on Computer Vision, ICCV 2023, held in Paris, France, 2nd–6th Oct 2023) [10.1109/ICCV51070.2023.01998].

Object-aware Gaze Target Detection

Tonini, Francesco; Dall'Asen, Nicola; Beyan, Cigdem; Ricci, Elisa
2023-01-01

Abstract

Gaze target detection aims to predict the image location a person is looking at, along with the probability that the gaze target falls outside the frame. Several works have tackled this task by regressing a gaze heatmap centered on the gaze location; however, they overlooked decoding the relationship between people and the objects they gaze at. This paper proposes a Transformer-based architecture that automatically detects objects (including heads) in the scene and builds associations between each head and the gazed head/object, resulting in a comprehensive, explainable gaze analysis composed of the gaze target area, the gaze pixel point, and the class and image location of the gazed object. Evaluated on in-the-wild benchmarks, our method achieves state-of-the-art results on all metrics (up to a 2.91% gain in AUC, a 50% reduction in gaze distance, and a 9% gain in out-of-frame average precision) for gaze target detection, and an 11-13% improvement in average precision for the classification and localization of gazed objects. The code of the proposed method is available at https://github.com/francescotonini/object-aware-gaze-target-detection
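As background for the heatmap-regression formulation the abstract refers to, below is a minimal, illustrative Python sketch of how such a supervision target is commonly built: a 2D Gaussian centered on the annotated gaze point. The function name, heatmap size, and sigma here are assumptions made for this example only, not the paper's implementation; see the linked repository for the actual code.

    import numpy as np

    def gaussian_gaze_heatmap(gaze_xy, size=(64, 64), sigma=3.0):
        """Build a 2D Gaussian target heatmap centered on a normalized gaze point.

        gaze_xy: (x, y) gaze location in [0, 1] image coordinates.
        size:    (height, width) of the output heatmap (illustrative choice).
        sigma:   Gaussian spread in heatmap pixels (illustrative choice).
        """
        h, w = size
        cx, cy = gaze_xy[0] * (w - 1), gaze_xy[1] * (h - 1)
        ys, xs = np.mgrid[0:h, 0:w]  # per-pixel coordinate grids
        heatmap = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        return heatmap / heatmap.max()  # normalize so the peak equals 1

    # Example: supervision target for a person gazing at the right-center of the frame
    target = gaussian_gaze_heatmap((0.75, 0.5))
    print(target.shape, float(target.max()))  # (64, 64) 1.0

A model trained this way regresses the full heatmap, and the predicted gaze point is taken as its argmax; the contribution of this paper is to additionally associate that point with a detected object.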
2023
IEEE/CVF International Conference on Computer Vision (ICCV)
Piscataway, NJ USA
IEEE
ISBN: 979-8-3503-0718-4
ISBN: 979-8-3503-0719-1
Files in this record:

IC28_iccv2023_gaze_with_supp_compressed.pdf
  Access: Open access
  Type: Refereed author's manuscript (post-print)
  License: All rights reserved
  Size: 900.82 kB
  Format: Adobe PDF

Object-aware_Gaze_Target_Detection.pdf
  Access: Archive managers only
  Type: Publisher's layout (editorial version)
  License: All rights reserved
  Size: 6.54 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/387310
Citations
  • PubMed Central: not available
  • Scopus: 2
  • Web of Science: 0
  • OpenAlex: not available