Enhancing Perceptual Attributes with Bayesian Style Generation / Siarohin, A.; Zen, G.; Sebe, N.; Ricci, E. - 11361 LNCS (2019), pp. 483-498. (Paper presented at the ACCV conference held in Perth, Australia, 2–6 December 2018) [10.1007/978-3-030-20887-5_30].

Enhancing Perceptual Attributes with Bayesian Style Generation

A. Siarohin; G. Zen; N. Sebe; E. Ricci
2019-01-01

Abstract

Deep learning has brought unprecedented progress in computer vision, and significant advances have been made in predicting subjective properties inherent to visual data (e.g., memorability, aesthetic quality, evoked emotions). Recently, some research works have even proposed deep learning approaches to modify images so as to alter these properties appropriately. Following this research line, this paper introduces a novel deep learning framework for synthesizing images in order to enhance a predefined perceptual attribute. Our approach takes a natural image as input and exploits recent models for deep style transfer and generative adversarial networks to change its style in order to modify a specific high-level attribute. Differently from previous works focusing on enhancing a specific property of visual content, we propose a general framework and demonstrate its effectiveness in two use cases, i.e., increasing image memorability and generating scary pictures. We evaluate the proposed approach on publicly available benchmarks, demonstrating its advantages over state-of-the-art methods.
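The abstract describes a pipeline in which a style-transfer model re-renders an input image and an attribute predictor guides which style to apply. The minimal Python sketch below illustrates only that general idea: the names (attribute_score, apply_style, enhance_attribute) and the sample-and-score selection strategy are illustrative stand-ins, not the Bayesian style-generation procedure of the paper, and the predictor and style-transfer steps are stubbed so the sketch runs end to end.

    import random

    # Stand-in attribute predictor: in the paper this would be a pretrained deep
    # network scoring e.g. memorability or "scariness"; here it returns a random
    # value so the control flow can be exercised.
    def attribute_score(image, style):
        return random.random()

    # Stand-in style-transfer step: in the paper a deep style-transfer model
    # re-renders the image with the chosen style; here it only records the pairing.
    def apply_style(image, style):
        return f"{image} rendered with {style}"

    def enhance_attribute(image, style_bank, n_samples=10, seed=0):
        """Sample candidate styles, keep the one with the highest predicted
        attribute score, and re-render the image with it."""
        random.seed(seed)
        candidates = random.sample(style_bank, min(n_samples, len(style_bank)))
        best_style = max(candidates, key=lambda s: attribute_score(image, s))
        return apply_style(image, best_style)

    if __name__ == "__main__":
        styles = [f"style_{i}" for i in range(50)]
        print(enhance_attribute("photo.jpg", styles, n_samples=5))

In the paper, the naive random sampling over styles shown here is replaced by a learned, attribute-aware generation of style candidates; the sketch only conveys the overall select-and-transfer loop.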
2019
Asian Conference on Computer Vision
Heidelberg
Springer
Siarohin, A.; Zen, G.; Sebe, N.; Ricci, E.
Files in this record:

1812.00717.pdf
Access: Open access
Type: Non-refereed preprint
License: All rights reserved
Size: 3.81 MB
Format: Adobe PDF

Siarohin2019_Chapter_EnhancingPerceptualAttributesW (1).pdf
Access: Restricted (archive managers only)
Type: Publisher's layout
License: All rights reserved
Size: 4.81 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/250731
Citations
  • PMC: ND
  • Scopus: 2
  • Web of Science (ISI): 0
  • OpenAlex: ND