
A Unified Masked Jigsaw Puzzle Framework for Vision and Language Models / Ye, Weixin; Wang, Wei; Liu, Yahui; Song, Yue; Ren, Bin; Bi, Wei; Cucchiara, Rita; Sebe, Nicu. - In: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE. - ISSN 0162-8828. - 48:2(2026), pp. 1873-1887. [10.1109/TPAMI.2025.3621246]

A Unified Masked Jigsaw Puzzle Framework for Vision and Language Models

Yahui Liu; Yue Song; Bin Ren; Nicu Sebe
2026-01-01

Abstract

In federated learning, the Transformer, a popular architecture, faces critical challenges in defending against gradient attacks and in improving model performance across Computer Vision (CV) and Natural Language Processing (NLP) tasks. It has been shown that the gradients of Position Embeddings (PEs) in Transformers carry enough information to reconstruct the input data. To mitigate this issue, we introduce the Masked Jigsaw Puzzle (MJP) framework. MJP first applies random token shuffling to break the token order, and then uses a learnable unknown (unk) position embedding to mask out the PEs of the shuffled tokens. In this manner, the local spatial information encoded in the PEs is disrupted, and the models are forced to learn feature representations that rely less on it. Notably, with careful use of MJP, we can not only improve models' robustness against gradient attacks, but also boost their performance in both vision and text application scenarios, such as image classification (e.g., ImageNet-1K) and sentiment analysis (e.g., Yelp and Amazon). Experimental results suggest that MJP is a unified framework for different Transformer-based models in both vision and language tasks.
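The two steps the abstract describes — shuffling a random subset of tokens and masking their position embeddings with a shared learnable unk embedding — can be sketched as below. This is a minimal illustration of the idea, not the paper's implementation; the function name, `mask_ratio` default, and tensor shapes are all assumptions for the example.

```python
import torch

def masked_jigsaw_puzzle(tokens, pos_embed, unk_embed, mask_ratio=0.5):
    """Sketch of the MJP idea (illustrative, not the authors' code):
    shuffle a random subset of tokens per sample, then replace the
    position embeddings of those positions with a shared learnable
    <unk> embedding before adding PEs to the tokens."""
    B, N, D = tokens.shape
    num_masked = int(N * mask_ratio)
    out_tokens = tokens.clone()
    # Broadcast the shared PEs to the batch so we can edit them per sample.
    pe = pos_embed.unsqueeze(0).expand(B, N, D).clone()
    for b in range(B):
        idx = torch.randperm(N)[:num_masked]    # positions to disturb
        perm = idx[torch.randperm(num_masked)]  # jigsaw: shuffled order
        out_tokens[b, idx] = tokens[b, perm]    # break the token order
        pe[b, idx] = unk_embed                  # mask their PEs with <unk>
    return out_tokens + pe

# Usage with ViT-like patch tokens and a learnable unk embedding
tokens = torch.randn(2, 16, 32)                    # [batch, patches, dim]
pos_embed = torch.randn(16, 32)                    # one PE per position
unk_embed = torch.nn.Parameter(torch.zeros(32))    # shared <unk> PE
out = masked_jigsaw_puzzle(tokens, pos_embed, unk_embed)
```

Because the masked positions all receive the same unk embedding, the model cannot recover the shuffled tokens' locations from their PEs, which is what disrupts the local spatial information and, per the abstract, degrades gradient-based input reconstruction.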
Year: 2026
Issue: 2
Authors: Ye, Weixin; Wang, Wei; Liu, Yahui; Song, Yue; Ren, Bin; Bi, Wei; Cucchiara, Rita; Sebe, Nicu
Files in this product:

File: A_Unified_Masked_Jigsaw_Puzzle_Framework_for_Vision_and_Language_Models.pdf
Access: open access
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 8.28 MB
Format: Adobe PDF (View/Open)

File: A_Unified_Masked_Jigsaw_Puzzle_Framework_for_Vision_and_Language_Models (1).pdf
Access: archive administrators only
Type: Publisher's layout (editorial version)
License: All rights reserved
Size: 8.92 MB
Format: Adobe PDF (View/Open)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/474154
Citations
  • PMC: 1
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: 0