Learning Cross-Modal Deep Representations for Robust Pedestrian Detection

Xu, Dan; Ricci, Elisa; Sebe, Nicu
2017-01-01

Abstract

This paper presents a novel method for detecting pedestrians under adverse illumination conditions. Our approach relies on a cross-modality learning framework organized in two main phases. First, given a multimodal dataset, a deep convolutional network is employed to learn a non-linear mapping that models the relations between RGB and thermal data. Then, the learned feature representations are transferred to a second deep network, which receives an RGB image as input and outputs the detection results. In this way, features that are both discriminative and robust to poor illumination conditions are learned. Importantly, at test time only the second pipeline is used and no thermal data are required. Our extensive evaluation demonstrates that the proposed approach outperforms the state-of-the-art on the challenging KAIST multispectral pedestrian dataset and is competitive with previous methods on the popular Caltech dataset.
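The two-phase framework described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: the layer shapes, the MSE reconstruction loss for the RGB-to-thermal mapping, and the weight-sharing transfer step are all illustrative assumptions based only on the abstract's description.

```python
import torch
import torch.nn as nn

# Phase 1 (illustrative): a small convolutional network that learns a
# non-linear mapping from RGB images to thermal-like images, so that its
# intermediate features capture relations between the two modalities.
class RGBToThermalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # shared representation
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.reconstruct = nn.Conv2d(64, 1, 3, padding=1)  # 1-channel thermal map

    def forward(self, rgb):
        return self.reconstruct(self.features(rgb))

# Phase 2 (illustrative): a detection network that takes only RGB input.
# Its feature layers are initialized from phase 1, transferring the
# cross-modal representation; thermal data are not needed at test time.
class RGBDetector(nn.Module):
    def __init__(self, pretrained_features):
        super().__init__()
        self.features = pretrained_features        # transferred (shared) weights
        self.head = nn.Conv2d(64, 2, 1)            # toy pedestrian/background scores

    def forward(self, rgb):
        return self.head(self.features(rgb))

# Phase 1 training step: regress thermal images from paired RGB images.
model = RGBToThermalNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
rgb = torch.randn(4, 3, 128, 64)                   # dummy RGB batch
thermal = torch.randn(4, 1, 128, 64)               # dummy paired thermal batch
loss = nn.functional.mse_loss(model(rgb), thermal)
opt.zero_grad()
loss.backward()
opt.step()

# Phase 2: reuse the learned features in the RGB-only detector.
detector = RGBDetector(model.features)
scores = detector(torch.randn(1, 3, 128, 64))      # thermal input no longer required
print(scores.shape)                                 # torch.Size([1, 2, 128, 64])
```

The key design point this sketch mirrors is that the cross-modal knowledge lives entirely in the feature layers, so once those weights are transferred, the detection branch can discard the thermal modality.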
Year: 2017
Conference: 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017)
Place of publication: Piscataway, NJ
Publisher: IEEE
ISBN: 978-1-5386-0457-1
Authors: Xu, Dan; Ouyang, Wanli; Ricci, Elisa; Wang, Xiaogang; Sebe, Nicu
Learning Cross-Modal Deep Representations for Robust Pedestrian Detection / Xu, Dan; Ouyang, Wanli; Ricci, Elisa; Wang, Xiaogang; Sebe, Nicu. - (2017), pp. 4236-4244. (Paper presented at the 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), held in Honolulu, 21-26 July 2017) [10.1109/CVPR.2017.451].
Files in this product:

File: Xu_Learning_Cross-Modal_Deep_CVPR_2017_paper.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 1.65 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/193402
Citations
  • PMC: ND
  • Scopus: 157
  • Web of Science (ISI): 117