
AttentionGAN: Unpaired Image-to-Image Translation Using Attention-Guided Generative Adversarial Networks / Tang, H.; Liu, H.; Xu, D.; Torr, P. H. S.; Sebe, N. - In: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS. - ISSN 2162-237X. - 34:4(2023), pp. 1972-1987. [10.1109/TNNLS.2021.3105725]

AttentionGAN: Unpaired Image-to-Image Translation Using Attention-Guided Generative Adversarial Networks

Tang, H.; Xu, D.; Sebe, N.
2023-01-01

Abstract

State-of-the-art methods in image-to-image translation are capable of learning a mapping from a source domain to a target domain with unpaired image data. Although existing methods have achieved promising results, they still produce visual artifacts: they translate low-level information but not the high-level semantics of input images. One possible reason is that the generators cannot perceive the most discriminative parts between the source and target domains, which lowers the quality of the generated images. In this article, we propose Attention-Guided Generative Adversarial Networks (AttentionGAN) for the unpaired image-to-image translation task. AttentionGAN can identify the most discriminative foreground objects and minimize changes to the background. The attention-guided generators in AttentionGAN produce attention masks and then fuse the generation output with the attention masks to obtain high-quality target images. Accordingly, we also design a novel attention-guided discriminator that considers only the attended regions. Extensive experiments on several generative tasks with eight public datasets demonstrate that the proposed method generates sharper and more realistic images than existing competitive models. The code is available at https://github.com/Ha0Tang/AttentionGAN.
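As a rough illustration of the mask-based fusion described in the abstract, the minimal sketch below shows how an attention mask can blend generated content with the source image so that only the attended foreground changes. This single-mask formulation is an assumption for illustration (the abstract does not specify the number of masks), and the function name attention_fuse and the tensor names are hypothetical, not taken from the released code.

```python
import torch

def attention_fuse(x, content, attention):
    """Fuse raw generator content with the source image via an attention mask.

    x         : source image,             shape (N, 3, H, W)
    content   : raw generator output,     shape (N, 3, H, W)
    attention : foreground mask in [0,1], shape (N, 1, H, W)
    """
    # Attended (foreground) pixels come from the generated content;
    # unattended pixels are copied from the input, preserving the background.
    return attention * content + (1.0 - attention) * x

# Toy usage with random tensors standing in for real network outputs.
x = torch.rand(1, 3, 256, 256)                          # source-domain image
content = torch.rand(1, 3, 256, 256)                    # content branch output
attention = torch.sigmoid(torch.rand(1, 1, 256, 256))   # attention branch output
fake = attention_fuse(x, content, attention)
```

Under this formulation, the generator can only alter pixels where the attention mask is active, which is what lets it translate the discriminative foreground while leaving the background largely unchanged.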
Year: 2023
Issue: 4
Tang, H.; Liu, H.; Xu, D.; Torr, P. H. S.; Sebe, N.
Files in this product:

File: TNNLS21-AttentionGAN.pdf
Access: open access
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 1.67 MB
Format: Adobe PDF

File: AttentionGAN_Unpaired_Image-to-Image_Translation_Using_Attention-Guided_Generative_Adversarial_Networks.pdf
Access: restricted to repository managers
Type: Publisher's layout (editorial version)
License: All rights reserved
Size: 7.27 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/377288
Citations
  • PubMed Central: 9
  • Scopus: 99
  • Web of Science: 90
  • OpenAlex: not available