
Cross-view panorama image synthesis with progressive attention GANs

Tang, H.; Sebe, N.; Yan, Y.
2022-01-01

Abstract

Despite the significant progress of conditional image generation, it remains difficult to synthesize a ground-view panorama image from a top-view aerial image. Among the core challenges are the vast differences in appearance and resolution between aerial and panorama images, and the limited side information available for the top-to-ground viewpoint transformation. To address these challenges, we propose a new Progressive Attention Generative Adversarial Network (PAGAN) with two novel components: a multistage progressive generation framework and a cross-stage attention module. In the first stage, an aerial image is fed into a U-Net-like network to generate one local region of the panorama image and its corresponding segmentation map. The synthetic panorama region is then extended and refined through the following generation stages by the proposed cross-stage attention module, which passes semantic information forward stage by stage. In each successive generation stage, the synthetic panorama image and segmentation map are separately fed into an image discriminator and a segmentation discriminator, which compute both real/fake scores and feature-alignment score maps for discrimination. The model is trained with a novel orientation-aware data augmentation strategy based on the geometric relation between aerial and panorama images. Extensive experiments on two cross-view datasets show that PAGAN generates high-quality panorama images with more convincing details than state-of-the-art methods.
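The cross-stage attention module itself is not specified in this record. As a rough illustration only, the general idea of letting the current stage's features attend to semantic features carried over from the previous stage can be sketched with a generic scaled dot-product attention; every name and shape below is a hypothetical assumption, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_stage_attention(curr_feat, prev_feat):
    """Hypothetical sketch: tokens of the current generation stage
    (queries) attend to semantic tokens from the previous stage
    (keys/values). Inputs are (N, C) feature matrices."""
    d = curr_feat.shape[1]
    scores = curr_feat @ prev_feat.T / np.sqrt(d)   # (N_curr, N_prev)
    attn = softmax(scores, axis=-1)                 # each row sums to 1
    return attn @ prev_feat                         # (N_curr, C)

# Toy usage with made-up token counts and channel width.
curr = np.random.rand(4, 8)   # 4 current-stage tokens, 8 channels
prev = np.random.rand(6, 8)   # 6 previous-stage tokens, 8 channels
out = cross_stage_attention(curr, prev)
print(out.shape)  # (4, 8)
```

In the paper's multistage setting, the output of such an operation would be fused back into the current stage's features so that semantic information accumulated in earlier stages guides the extension and refinement of the panorama region; the fusion step is omitted here.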
2022
Wu, S.; Tang, H.; Jing, X.-Y.; Qian, J.; Sebe, N.; Yan, Y.; Zhang, Q.
Cross-view panorama image synthesis with progressive attention GANs / Wu, S.; Tang, H.; Jing, X.-Y.; Qian, J.; Sebe, N.; Yan, Y.; Zhang, Q. - In: PATTERN RECOGNITION. - ISSN 0031-3203. - 131:(2022), art. 108884. [10.1016/j.patcog.2022.108884]
Files in this item:
File: PR22-Cross-view.pdf (archive administrators only)
Type: Editorial version (publisher's layout)
License: All rights reserved
Size: 3.84 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/361266
Citations
  • PMC: ND
  • Scopus: 11
  • Web of Science: 8
  • OpenAlex: ND