Class-Aware Modality Mix and Center-Guided Metric Learning for Visible-Thermal Person Re-Identification


Abstract

Visible-thermal person re-identification (VT-REID) is an important and challenging task because 1) weak lighting conditions are inevitable in real-world settings and 2) the discrepancy between the two modalities is severe. Most existing methods either aim at reducing the cross-modality gap at the pixel and feature levels or at optimizing the cross-modality network with metric learning techniques. However, few works have jointly considered these two aspects and studied their mutual benefits. In this paper, we design a novel framework that bridges the modality gap at both the pixel and feature levels without additional parameters, and at the same time reduces inter- and intra-modality variations through a center-guided metric learning constraint. Specifically, we introduce the Class-aware Modality Mix (CMM) to generate images intermediate between the two modalities, reducing the modality gap at the pixel level. In addition, we exploit the KL-divergence to further align the modality distributions at the feature level. Finally, we propose an efficient Center-guided Metric Learning (CML) method that decreases inter- and intra-modality discrepancies by enforcing constraints on class centers and instances. Extensive experiments on two datasets show the mutual advantage of the proposed components and demonstrate the superiority of our method over the state of the art.
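The abstract only names the three components. As a rough illustration, the sketch below shows one plausible PyTorch formulation of each idea: a pixel-level blend of identity-paired visible/thermal images, a symmetric KL-divergence term aligning the class distributions predicted for the two modalities, and a center-guided constraint tying instances of one modality to identity centers of the other. This is a minimal sketch under assumed conventions (identity-paired mini-batches, a linear blend for the mix, these particular loss forms and names such as `mix_ratio` and `margin`); it is not the authors' implementation.

```python
# Illustrative sketch only (assumed formulations, not the authors' code).
import torch
import torch.nn.functional as F


def class_aware_modality_mix(vis_imgs, thr_imgs, mix_ratio=0.5):
    """Pixel-level blend of a visible and a thermal image of the same identity.

    vis_imgs, thr_imgs: (B, C, H, W) tensors paired by identity.
    A linear blend is assumed here; it yields images lying between modalities.
    """
    return mix_ratio * vis_imgs + (1.0 - mix_ratio) * thr_imgs


def modality_kl_loss(vis_logits, thr_logits):
    """Symmetric KL divergence between the class distributions predicted for
    the two modalities, encouraging feature-level alignment."""
    log_p_vis = F.log_softmax(vis_logits, dim=1)
    log_p_thr = F.log_softmax(thr_logits, dim=1)
    kl_vt = F.kl_div(log_p_vis, log_p_thr.exp(), reduction="batchmean")
    kl_tv = F.kl_div(log_p_thr, log_p_vis.exp(), reduction="batchmean")
    return 0.5 * (kl_vt + kl_tv)


def center_guided_loss(vis_feats, thr_feats, labels, margin=0.3):
    """One plausible center-guided constraint: pull each instance towards the
    identity center computed from the other modality (intra-class), and keep
    different identity centers at least `margin` apart (inter-class)."""
    loss = vis_feats.new_zeros(())
    centers = []
    for pid in labels.unique():
        mask = labels == pid
        c_vis = vis_feats[mask].mean(dim=0)   # visible center of this identity
        c_thr = thr_feats[mask].mean(dim=0)   # thermal center of this identity
        centers.append(0.5 * (c_vis + c_thr))
        # Intra-class, cross-modality: instances close to the other modality's center.
        loss = loss + (vis_feats[mask] - c_thr).pow(2).sum(dim=1).mean()
        loss = loss + (thr_feats[mask] - c_vis).pow(2).sum(dim=1).mean()
    centers = torch.stack(centers)            # (P, D), one center per identity
    if len(centers) > 1:
        dists = torch.cdist(centers, centers)
        off_diag = ~torch.eye(len(centers), dtype=torch.bool, device=dists.device)
        loss = loss + F.relu(margin - dists[off_diag]).mean()
    return loss
```

In a training loop one would presumably pass the mixed images through the shared backbone together with the original visible and thermal images and sum the identification loss with the two terms above; the loss weights and sampling strategy are, again, assumptions not specified in the abstract.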
Year: 2020
Published in: Proceedings of the 28th ACM International Conference on Multimedia
Publisher: ACM, New York
ISBN: 9781450379885
Authors: Ling, Yongguo; Zhong, Zhun; Luo, Zhiming; Rota, Paolo; Li, Shaozi; Sebe, Nicu
Citation: Ling, Yongguo; Zhong, Zhun; Luo, Zhiming; Rota, Paolo; Li, Shaozi; Sebe, Nicu. Class-Aware Modality Mix and Center-Guided Metric Learning for Visible-Thermal Person Re-Identification. (2020), pp. 889-897. Paper presented at ACM MM '20, held online (Seattle, United States), 12th-16th October 2020. [10.1145/3394171.3413821]
Files in this record:
File: 3394171.3413821.pdf - Publisher's layout version, Adobe PDF, 1.41 MB. Licence: all rights reserved. Access: archive managers only.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/284580
Citations: Scopus 37; ISI (Web of Science) 31