Exploring transfer learning approaches for head pose classification from multi-view surveillance images / Rajagopal, Anoop K; Subramanian, Ramanathan; Ricci, Elisa; Lanz, Oswald; Ramakrishnan, Kalpathi; Sebe, Niculae; Vieriu, Radu L. - In: INTERNATIONAL JOURNAL OF COMPUTER VISION. - ISSN 0920-5691. - 109:1-2 (2014), pp. 146-167. [10.1007/s11263-013-0692-2]

Exploring transfer learning approaches for head pose classification from multi-view surveillance images

Subramanian, Ramanathan; Ricci, Elisa; Lanz, Oswald; Sebe, Niculae; Vieriu, Radu L.
2014-01-01

Abstract

Head pose classification from surveillance images acquired with distant, large field-of-view cameras is difficult, as faces are captured at low resolution and have a blurred appearance. Domain adaptation approaches are useful for transferring knowledge from the training (source) to the test (target) data when they have different attributes, minimizing target data labeling efforts in the process. This paper examines the use of transfer learning for efficient multi-view head pose classification with minimal target training data under three challenging situations: (i) where the range of head poses in the source and target images is different, (ii) where source images capture a stationary person while target images capture a moving person whose facial appearance varies under motion due to changing perspective and scale, and (iii) a combination of (i) and (ii). On the whole, the presented methods represent novel transfer learning solutions employed in the context of multi-view head pose classification. We demonstrate through extensive experimental validation that the proposed solutions considerably outperform the state of the art. Finally, we present the DPOSE dataset, compiled for benchmarking head pose classification performance with moving persons and for aiding behavioral understanding applications.
Files in this record:

File: IJCV2014.pdf
Access: Archive managers only
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 3.66 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/68257
Citations
  • PMC: n/a
  • Scopus: 48
  • Web of Science: 31
  • OpenAlex: n/a