Attention-Guided Generative Adversarial Networks for Unsupervised Image-to-Image Translation / Tang, Hao; Xu, D.; Sebe, N.; Yan, Yan. - 2019:(2019), pp. 1-8. (Paper presented at the 2019 International Joint Conference on Neural Networks, IJCNN 2019, held in Budapest, 14-19 July 2019) [10.1109/IJCNN.2019.8851881].

Attention-Guided Generative Adversarial Networks for Unsupervised Image-to-Image Translation

Tang, Hao; Xu, D.; Sebe, N.; Yan, Y.
2019-01-01

Abstract

State-of-the-art approaches based on Generative Adversarial Networks (GANs) can learn a mapping from one image domain to another using unpaired image data. However, these methods often produce artifacts: they transfer only low-level information and fail to convey the high-level semantic content of images. The main reason is that the generators cannot detect the most discriminative semantic parts of images, which leads to low-quality results. To address this limitation, we propose a novel Attention-Guided Generative Adversarial Network (AGGAN) that detects the most discriminative semantic object and minimizes changes to unwanted regions in semantic manipulation problems, without using extra data or models. The attention-guided generators in AGGAN produce attention masks via a built-in attention mechanism and then fuse the input image with the attention mask to obtain a high-quality target image. Moreover, we propose a novel attention-guided discriminator that considers only attended regions. The proposed AGGAN is trained end-to-end with an adversarial loss, a cycle-consistency loss, a pixel loss, and an attention loss. Both qualitative and quantitative results demonstrate that our approach generates sharper and more accurate images than existing models.
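The mask-based fusion described in the abstract can be sketched as follows. This is a minimal, illustrative PyTorch sketch, not the authors' released implementation: the class name, layer sizes, and backbone below are placeholder assumptions; only the blending of input and generated content via an attention mask reflects the mechanism the abstract describes.

import torch
import torch.nn as nn

# Minimal sketch of attention-guided fusion (illustrative; not the paper's code).
# The generator emits a translated content image plus a one-channel attention
# mask, then blends them with the input so unattended regions stay unchanged.
class AttentionGuidedGenerator(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Placeholder backbone; a real model would use a deeper encoder-decoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.content_head = nn.Conv2d(channels, 3, kernel_size=7, padding=3)
        self.mask_head = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # x: input image batch scaled to [-1, 1], shape (N, 3, H, W).
        feats = self.backbone(x)
        content = torch.tanh(self.content_head(feats))   # translated image, [-1, 1]
        mask = torch.sigmoid(self.mask_head(feats))      # attention mask, (0, 1)
        # Use generated content where attention is high and keep the input
        # elsewhere, minimizing changes to unwanted regions.
        output = mask * content + (1.0 - mask) * x
        return output, mask

In training, the fused output and the mask would feed the adversarial, cycle-consistency, pixel, and attention losses listed in the abstract, and the attention-guided discriminator would score only the regions selected by the mask.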
Year: 2019
Published in: Proceedings of the International Joint Conference on Neural Networks
Place of publication: New York
Publisher: Institute of Electrical and Electronics Engineers Inc.
Files in this record:

1903.12296.pdf
  Access: Open Access from 01/10/2021
  Type: Post-print, refereed (Refereed author's manuscript)
  License: All rights reserved
  Size: 7.69 MB
  Format: Adobe PDF

08851881.pdf
  Access: Repository administrators only
  Type: Publisher's version (Publisher's layout)
  License: All rights reserved
  Size: 7.68 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/250761
Citations
  • PMC: not available
  • Scopus: 85
  • Web of Science (ISI): 62