Multi-View Spatial Attention Embedding for Vehicle Re-Identification

Author: Sebe N.
Date: 2021-01-01

Abstract

Vehicle Re-Identification (Re-ID) is a challenging vision task, mainly because the appearance of a vehicle varies dramatically across viewpoints. Moreover, different vehicles of the same model and color commonly look alike and are therefore hard to distinguish. To alleviate the negative effects of viewpoint variance, we design a multi-view branch network in which each branch learns a viewpoint-specific feature without parameter sharing. By focusing on a limited range of viewpoints, each viewpoint-specific feature performs substantially better than the general feature learned by a uniform network. To further differentiate visually similar vehicles, we strengthen the discriminative power on their subtle local differences by introducing a spatial attention model into each feature learning branch. The multi-view feature learning and spatial attention learning compose our neural network architecture, which is trained end to end with the softmax loss and the triplet loss. We evaluate our methods on two large vehicle Re-ID datasets, VehicleID and VeRi-776. Extensive experiments show that our methods achieve promising performance; for example, we achieve mAP of 76.78% and 72.53% on the VehicleID and VeRi-776 datasets, respectively, substantially better than the current state of the art.
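The abstract describes the architecture only in words, so a minimal PyTorch sketch of the idea may help. Everything below is an assumption for illustration, not the authors' implementation: the ResNet-50 backbone, the split after layer3, the five viewpoint branches, the 1x1-convolution sigmoid attention, the 256-dimensional embedding, the loss margin, and the hand-picked triplet are all placeholders, and MultiViewAttentionNet / SpatialAttention are hypothetical names.

```python
# Illustrative sketch only: backbone choice, branch count, attention layout,
# and loss settings are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torchvision.models as models


class SpatialAttention(nn.Module):
    """Single-channel attention map that spatially reweights features."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                        # x: (B, C, H, W)
        attn = torch.sigmoid(self.score(x))      # (B, 1, H, W), values in [0, 1]
        return x * attn                          # emphasize subtle local cues


class MultiViewAttentionNet(nn.Module):
    def __init__(self, num_ids, num_views=5, feat_dim=256):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # Shared low-level trunk up to layer3; the split point is an assumption.
        self.trunk = nn.Sequential(*list(resnet.children())[:-3])
        # One independent branch per viewpoint: no parameter sharing.
        self.branches = nn.ModuleList(
            nn.Sequential(
                models.resnet50(weights=None).layer4,  # fresh, unshared weights
                SpatialAttention(2048),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(2048, feat_dim),
            )
            for _ in range(num_views)
        )
        self.classifier = nn.Linear(feat_dim, num_ids)  # head for the softmax loss

    def forward(self, x, view_idx):
        """view_idx: per-image viewpoint label, shape (B,)."""
        shared = self.trunk(x)                   # (B, 1024, H, W)
        feats = torch.stack([
            self.branches[v](shared[i:i + 1]).squeeze(0)
            for i, v in enumerate(view_idx.tolist())
        ])                                       # (B, feat_dim)
        logits = self.classifier(feats)          # softmax (ID) branch
        return feats, logits                     # embeddings feed the triplet loss


# Joint training step; identity count, margin, and the hand-picked triplet
# below are placeholders (a real pipeline mines triplets from the batch).
model = MultiViewAttentionNet(num_ids=1000)
images = torch.randn(4, 3, 224, 224)
views = torch.tensor([0, 1, 2, 0])               # viewpoint index per image
labels = torch.tensor([3, 7, 3, 9])              # vehicle identity per image
feats, logits = model(images, views)
loss = (nn.CrossEntropyLoss()(logits, labels)
        + nn.TripletMarginLoss(margin=0.3)(feats[0:1], feats[2:3], feats[1:2]))
loss.backward()
```

The sketch mirrors the three ideas in the abstract: each viewpoint gets its own unshared branch, the attention map reweights feature locations before pooling so subtle local differences survive global averaging, and the classifier logits feed a softmax (ID) loss while the pooled embeddings feed a triplet loss, trained jointly end to end.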
Year: 2021
Issue: 2
Authors: Teng, S.; Zhang, S.; Huang, Q.; Sebe, N.
Multi-View Spatial Attention Embedding for Vehicle Re-Identification / Teng, S.; Zhang, S.; Huang, Q.; Sebe, N. - In: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY. - ISSN 1051-8215. - 31:2(2021), pp. 816-827. [10.1109/TCSVT.2020.2980283]
Files in this record:
File: Multi-View_Spatial_Attention_Embedding_for_Vehicle_Re-Identification.pdf
Access: Archive administrators only
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 3.02 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/326162
Citations
  • PMC: not available
  • Scopus: 31
  • Web of Science (ISI): 25