SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer From a Spectral Perspective

Xu, Zipeng; Xing, Songlong; Sangineto, Enver; Sebe, Nicu
2024-01-01

Abstract

Owing to the power of vision-language foundation models such as CLIP, the area of image synthesis has seen important recent advances. In particular, for style transfer, CLIP enables transferring more general and abstract styles without collecting style images in advance, since the style can be described efficiently in natural language and the result optimized by maximizing the CLIP similarity between the text description and the stylized image. However, directly using CLIP to guide style transfer leads to undesirable artifacts (mainly written words and unrelated visual entities) spread over the image. In this paper, we propose SpectralCLIP, which is based on a spectral representation of the CLIP embedding sequence, in which most of the common artifacts occupy specific frequencies. By masking the band that includes these frequencies, we can condition the generation process to adhere to the target style properties (e.g., color, texture, paint stroke) while excluding the generation of the larger-scale structures corresponding to the artifacts. Experimental results show that SpectralCLIP effectively prevents the generation of artifacts, in both quantitative and qualitative terms, without impairing the stylization quality. We also apply SpectralCLIP to text-conditioned image generation and show that it prevents the appearance of written words in the generated images. Our code is available at https://github.com/zipengxuc/SpectralCLIP.
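To make the band-masking idea in the abstract concrete, below is a minimal, illustrative sketch (not the authors' released implementation) of filtering the spectral representation of a CLIP token-embedding sequence before computing a text-image similarity loss. The tensor shapes, the band indices low/high, the pooling step, and the helper name band_mask_sequence are assumptions made only for illustration.

    # Illustrative sketch (not the authors' released implementation).
    # Assumed setup: CLIP vision-transformer token embeddings arranged as a
    # [num_tokens, dim] tensor; the frequency band below is a placeholder.
    import torch

    def band_mask_sequence(embeddings: torch.Tensor, low: int, high: int) -> torch.Tensor:
        # FFT along the token (sequence) dimension.
        spectrum = torch.fft.rfft(embeddings, dim=0)            # [num_freqs, dim], complex
        # Zero out the frequency band assumed to carry the artifacts.
        mask = torch.ones(spectrum.shape[0], 1)
        mask[low:high] = 0.0
        # Back to the token domain with the band suppressed.
        return torch.fft.irfft(spectrum * mask, n=embeddings.shape[0], dim=0)

    # Toy usage with random tensors standing in for CLIP outputs.
    img_tokens = torch.randn(50, 512)                           # fake image token embeddings
    txt_embed = torch.randn(512)                                # fake text embedding
    filtered = band_mask_sequence(img_tokens, low=5, high=20)   # illustrative band indices
    img_embed = filtered.mean(dim=0)                            # simple pooling for the sketch
    loss = 1 - torch.cosine_similarity(img_embed, txt_embed, dim=0)
    print(loss.item())

Which band to mask and where in the CLIP encoder the filtering is applied are design choices studied in the paper; the sketch only conveys the masking mechanism itself.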
2024
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
Piscataway, NJ, USA
Institute of Electrical and Electronics Engineers Inc.
979-8-3503-1892-0
Xu, Zipeng; Xing, Songlong; Sangineto, Enver; Sebe, Nicu
SpectralCLIP: Preventing Artifacts in Text-Guided Style Transfer From a Spectral Perspective / Xu, Zipeng; Xing, Songlong; Sangineto, Enver; Sebe, Nicu. - ELECTRONIC. - (2024), pp. 5109-5118. (Paper presented at the 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024, held in Waikoloa, Hawaii, USA, 4-8 January 2024) [10.1109/WACV57701.2024.00504].
Files in this record:

Xu_SpectralCLIP_Preventing_Artifacts_in_Text-Guided_Style_Transfer_From_a_Spectral_WACV_2024_paper.pdf
Access: Open access
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 7.4 MB
Format: Adobe PDF

SpectralCLIP_Preventing_Artifacts_in_Text-Guided_Style_Transfer_from_a_Spectral_Perspective (1).pdf
Access: Repository managers only
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 7.8 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/399937
Citations
  • PMC: not available
  • Scopus: 1
  • Web of Science: not available
  • OpenAlex: not available