
Cross-View Panorama Image Synthesis

Wu, S.; Tang, H.; Jing, X.-Y.; Zhao, H.; Qian, J.; Sebe, N.; Yan, Y.
2023

Abstract

In this paper, we tackle the problem of synthesizing a ground-view panorama image conditioned on a top-view aerial image, a challenging task because of the large gap between the two image domains with different viewpoints. Instead of learning the cross-view mapping in a single feedforward pass, we propose a novel adversarial feedback GAN framework named PanoGAN with two key components: an adversarial feedback module and a dual-branch discrimination strategy. First, the aerial image is fed into the generator to produce a target panorama image and its associated segmentation map, which provides layout semantics for model training. Second, the feature responses of the discriminator, encoded by our adversarial feedback module, are fed back to the generator to refine its intermediate representations, so that generation quality is continually improved through an iterative generation process. Third, to pursue high fidelity and semantic consistency in the generated panorama image, we propose a pixel-segmentation alignment mechanism under the dual-branch discrimination strategy to facilitate cooperation between the generator and the discriminator. Extensive experimental results on two challenging cross-view image datasets show that PanoGAN generates higher-quality panorama images with more convincing details than state-of-the-art approaches. The source code and trained models are available at https://github.com/sswuai/PanoGAN.
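The abstract outlines an iterative generation scheme in which the discriminator's feature responses are re-encoded and fed back to the generator, which then refines the panorama and its segmentation map over several passes. The following is a minimal, self-contained PyTorch-style sketch of that idea only; all module names, layer sizes, the shared output resolution, and the number of refinement iterations are illustrative assumptions, not the authors' actual architecture (see the linked GitHub repository for the released code).

import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Maps an aerial image (plus optional feedback features) to a panorama and a segmentation map."""
    def __init__(self, feat_ch=16):
        super().__init__()
        self.encode = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.fuse = nn.Conv2d(feat_ch * 2, feat_ch, 3, padding=1)
        self.to_rgb = nn.Conv2d(feat_ch, 3, 3, padding=1)   # panorama branch
        self.to_seg = nn.Conv2d(feat_ch, 4, 3, padding=1)   # segmentation branch (4 toy classes)

    def forward(self, aerial, feedback=None):
        h = torch.relu(self.encode(aerial))
        if feedback is not None:
            # Refine the intermediate representation with the encoded discriminator response.
            h = torch.relu(self.fuse(torch.cat([h, feedback], dim=1)))
        return torch.tanh(self.to_rgb(h)), self.to_seg(h)

class ToyDualBranchDiscriminator(nn.Module):
    """Scores the panorama and its segmentation map jointly (dual-branch discrimination)."""
    def __init__(self, feat_ch=16):
        super().__init__()
        self.img_branch = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.seg_branch = nn.Conv2d(4, feat_ch, 3, padding=1)
        self.score = nn.Conv2d(feat_ch * 2, 1, 3, padding=1)

    def forward(self, panorama, segmentation):
        f = torch.cat([torch.relu(self.img_branch(panorama)),
                       torch.relu(self.seg_branch(segmentation))], dim=1)
        return self.score(f), f  # patch-level real/fake scores and the feature response

# Adversarial feedback module (toy version): re-encodes discriminator responses for the generator.
feedback_encoder = nn.Conv2d(32, 16, 3, padding=1)

G, D = ToyGenerator(), ToyDualBranchDiscriminator()
aerial = torch.randn(1, 3, 64, 64)  # dummy aerial image
feedback = None
for step in range(3):  # iterative generation: each pass refines the previous output
    panorama, segmentation = G(aerial, feedback)
    score, d_features = D(panorama, segmentation)
    feedback = torch.relu(feedback_encoder(d_features))
print(panorama.shape, segmentation.shape, score.shape)

Running the snippet simply checks that the tensors flow through the loop; the adversarial and pixel-segmentation alignment losses described in the paper are omitted here.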
Wu, S.; Tang, H.; Jing, X.-Y.; Zhao, H.; Qian, J.; Sebe, N.; Yan, Y., "Cross-View Panorama Image Synthesis," in IEEE Transactions on Multimedia, vol. 25, pp. 3546-3559, 2023. ISSN 1520-9210. DOI: 10.1109/TMM.2022.3162474.
Files in this record:

Cross-View_TMM22.pdf
  Access: Open access
  Type: Refereed author's manuscript (post-print)
  License: All rights reserved
  Size: 6.8 MB
  Format: Adobe PDF

Cross-View_Panorama_Image_Synthesis_compressed (1).pdf
  Access: Archive administrators only
  Type: Publisher's layout version
  License: All rights reserved
  Size: 1.01 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/393009
Citations:
  • Scopus: 5
  • Web of Science (ISI): 3