
A Multi-Task Learning Framework for Head Pose Estimation under Target Motion / Yan, Yan; Ricci, Elisa; Subramanian, Ramanathan; Liu, Gaowen; Lanz, Oswald; Sebe, Niculae. - In: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE. - ISSN 0162-8828. - 38:6(2016), pp. 1070-1083. [10.1109/TPAMI.2015.2477843]

A Multi-Task Learning Framework for Head Pose Estimation under Target Motion

Yan, Yan; Ricci, Elisa; Subramanian, Ramanathan; Liu, Gaowen; Lanz, Oswald; Sebe, Niculae
2016-01-01

Abstract

Head pose estimation (HPE) from low-resolution surveillance data has recently gained importance. However, monocular and multi-view HPE approaches still perform poorly under target motion, as facial appearance is distorted by changes in camera perspective and scale when a person moves around. To address this, we propose FEGA-MTL, a novel framework based on Multi-Task Learning (MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple, large field-of-view surveillance cameras. Upon partitioning the monitored scene into a dense uniform spatial grid, FEGA-MTL simultaneously clusters grid partitions into regions with similar facial appearance and learns region-specific head pose classifiers. In the learning phase, guided by two graphs that a priori model the similarity among (1) grid partitions, based on camera geometry, and (2) head pose classes, FEGA-MTL derives the optimal scene partitioning and the associated pose classifiers. At test time, the target's position is determined with a person tracker and the corresponding region-specific classifier is invoked for HPE. The FEGA-MTL framework naturally extends to a weakly supervised setting where the target's walking direction is employed as a proxy in lieu of head orientation. Experiments confirm that FEGA-MTL significantly outperforms competing single-task and multi-task learning methods in multi-view settings.
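As a rough illustration of the test-time pipeline described in the abstract, the sketch below dispatches a head pose query to a region-specific classifier based on the tracked position. It is a minimal sketch under assumed names: the grid resolution, scene size, `cell_of_position`, `region_of_cell`, and `region_classifiers` are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of FEGA-MTL's test-time dispatch: the scene is split
# into a uniform grid, grid cells have been clustered into appearance regions
# during training, and each region owns its own head pose classifier.
# Names and data structures below are illustrative assumptions.

import numpy as np

GRID_ROWS, GRID_COLS = 10, 10    # assumed grid resolution
SCENE_W, SCENE_H = 8.0, 6.0      # assumed room size in metres


def cell_of_position(x, y):
    """Map a tracked ground-plane position (x, y) to a grid cell index."""
    col = min(int(x / SCENE_W * GRID_COLS), GRID_COLS - 1)
    row = min(int(y / SCENE_H * GRID_ROWS), GRID_ROWS - 1)
    return row * GRID_COLS + col


def estimate_head_pose(position, head_features, region_of_cell, region_classifiers):
    """
    position           : (x, y) from the person tracker
    head_features      : appearance descriptor of the head crop (e.g. HOG)
    region_of_cell     : dict cell_index -> region id, learned by clustering
    region_classifiers : dict region id -> classifier exposing .predict()
    """
    cell = cell_of_position(*position)
    region = region_of_cell[cell]
    clf = region_classifiers[region]
    return clf.predict(np.asarray(head_features).reshape(1, -1))[0]
```

In the weakly supervised variant mentioned in the abstract, the labels used to train each region's classifier would come from the target's walking direction rather than annotated head orientations.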
Files in this record:

[S2].pdf
  Access: Restricted (repository managers only)
  Type: Publisher's version (publisher's layout)
  License: All rights reserved
  Size: 1.46 MB
  Format: Adobe PDF

TPAMI_2nd_revision_submit1 (2).pdf
  Access: Restricted (repository managers only)
  Type: Non-refereed preprint
  License: All rights reserved
  Size: 5.25 MB
  Format: Adobe PDF

TPAMI_camera_ready.pdf
  Access: Open access
  Type: Refereed author's manuscript (post-print)
  License: All rights reserved
  Size: 5.09 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/148071
Citations
  • PMC: 3
  • Scopus: 111
  • Web of Science (ISI): 89
  • OpenAlex: not available