Diversity-Authenticity Co-constrained Stylization for Federated Domain Generalization in Person Re-identification

Yang, F.; Zhong, Z.; Sebe, N.
2024-01-01

Abstract

This paper tackles the problem of federated domain generalization in person re-identification (FedDG re-ID), which aims to learn a model that generalizes to unseen domains from decentralized source domains. Previous methods mainly focus on preventing local overfitting, while the direction of diversifying local data through stylization for model training is largely overlooked. Stylization is popular in centralized domain generalization but encounters two issues in the federated scenario: (1) most stylization methods require centralizing multiple domains to generate novel styles, which is not applicable under the decentralization constraint; (2) the authenticity of the generated data cannot be ensured, especially given limited local data, which may impair model optimization. To solve these two problems, we propose Diversity-Authenticity Co-constrained Stylization (DACS), which generates diverse and authentic data for learning robust local models. Specifically, we deploy a style transformation model on each domain to generate novel data under two constraints: (1) a diversity constraint increases data diversity by enlarging the Wasserstein distance between the original and transformed data; (2) an authenticity constraint ensures data authenticity by enforcing the transformed data to be easily recognized by the local-side global model and hardly recognized by the local model. Extensive experiments demonstrate the effectiveness of the proposed DACS and show that it achieves state-of-the-art performance on FedDG re-ID.
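
As a rough illustration of the two constraints described in the abstract, the following is a minimal, hypothetical PyTorch sketch of how a per-domain style transformation model could be trained with a diversity term and an authenticity term. The names (style_net, global_model, local_model, the features() hook) and the moment-based surrogate for the Wasserstein distance are assumptions made for this sketch, not the authors' implementation.

import torch
import torch.nn.functional as F

def diversity_loss(orig_feats, trans_feats):
    # Push transformed data away from the original distribution by enlarging
    # a simple per-dimension, moment-based surrogate of the Wasserstein
    # distance between feature statistics; negated so that minimizing the
    # loss maximizes the distance.
    mu_gap = (orig_feats.mean(0) - trans_feats.mean(0)).abs().mean()
    std_gap = (orig_feats.std(0) - trans_feats.std(0)).abs().mean()
    return -(mu_gap + std_gap)

def authenticity_loss(trans_images, labels, global_model, local_model):
    # Keep transformed data recognizable: low classification loss under the
    # local-side copy of the global model, high loss under the local model,
    # so the new styles stay authentic yet are not already covered locally.
    ce_global = F.cross_entropy(global_model(trans_images), labels)
    ce_local = F.cross_entropy(local_model(trans_images), labels)
    return ce_global - ce_local

def stylization_step(images, labels, style_net, global_model, local_model,
                     optimizer, lam=1.0):
    # One update of the style transformation model under both constraints.
    # `optimizer` is assumed to hold only style_net's parameters; the two
    # re-ID models stay frozen during this step.
    trans_images = style_net(images)
    with torch.no_grad():
        orig_feats = global_model.features(images)   # hypothetical feature hook
    trans_feats = global_model.features(trans_images)
    loss = diversity_loss(orig_feats, trans_feats) \
        + lam * authenticity_loss(trans_images, labels, global_model, local_model)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return trans_images.detach()

The transformed images returned by stylization_step would then be mixed with the original local data when updating the local re-ID model, before the usual federated aggregation round.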
2024
Proceedings of the AAAI Conference on Artificial Intelligence
New York
Association for the Advancement of Artificial Intelligence
Yang, F.; Zhong, Z.; Luo, Z.; He, Y.; Li, S.; Sebe, N.
Diversity-Authenticity Co-constrained Stylization for Federated Domain Generalization in Person Re-identification / Yang, F.; Zhong, Z.; Luo, Z.; He, Y.; Li, S.; Sebe, N. - 38:6 (2024), pp. 6477-6485. (Paper presented at the 38th AAAI Conference on Artificial Intelligence, AAAI 2024, held in Vancouver, Canada, 20–27 February 2024) [10.1609/aaai.v38i6.28468].
Files in this product:

28468-Article Text-32522-1-2-20240324.pdf

Access: Open access
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 753.07 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/409372
Citations
  • Scopus: 0