Graph-Based Generative Face Anonymisation with Pose Preservation / Dall'Asen, Nicola; Wang, Yiming; Tang, Hao; Zanella, Luca; Ricci, Elisa. - 13232:(2022), pp. 503-515. (Paper presented at the International Conference on Image Analysis and Processing, held in Lecce, 23rd-27th May 2022) [10.1007/978-3-031-06430-2_42].
Graph-Based Generative Face Anonymisation with Pose Preservation
Dall'Asen, Nicola; Wang, Yiming; Tang, Hao; Zanella, Luca; Ricci, Elisa
2022-01-01
Abstract
We propose AnonyGAN, a GAN-based solution for face anonymisation that replaces the visual information corresponding to a source identity with a condition identity provided as any single image. To maintain the geometric attributes of the source face, i.e., the facial pose and expression, and to promote more natural face generation, we propose to exploit a bipartite graph to explicitly model the relations between the facial landmarks of the source identity and those of the condition identity through a deep model. We further propose a landmark attention model that relaxes the manual selection of facial landmarks, allowing the network to weigh the landmarks for the best visual naturalness and pose preservation. Finally, to facilitate appearance learning, we propose a hybrid training strategy that addresses the lack of direct pixel-level supervision. We evaluate our method and its variants on two public datasets, CelebA and LFW, in terms of visual naturalness, facial pose preservation, and impact on face detection and re-identification. We show that AnonyGAN significantly outperforms state-of-the-art methods in terms of visual naturalness, face detection, and pose preservation. Code and pretrained models are available at https://github.com/Fodark/anonygan.
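To illustrate the general idea of relating source and condition landmarks through a bipartite structure, the following is a minimal, self-contained sketch. It is not the paper's actual deep model: the function name, the coordinate-based affinity, and the softmax weighting are all assumptions made purely for illustration. It computes, for each source landmark, a normalised weight over the condition landmarks based on spatial proximity, which is one simple way a bipartite source-to-condition attention could be instantiated.

```python
import math

def bipartite_landmark_attention(src, cond):
    """Toy bipartite attention between two landmark sets (hypothetical).

    src, cond: lists of (x, y) landmark coordinates.
    Returns one softmax-normalised weight row per source landmark,
    scoring each condition landmark by negative squared distance.
    """
    weights = []
    for sx, sy in src:
        # Affinity of this source landmark to every condition landmark.
        scores = [-((sx - cx) ** 2 + (sy - cy) ** 2) for cx, cy in cond]
        # Numerically stable softmax over the affinity scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights
```

In this toy version a condition landmark close to a given source landmark receives most of that row's weight; in the paper the relations are instead learned end-to-end, and the landmark attention module lets the network decide which landmarks matter for naturalness and pose preservation rather than relying on a fixed geometric rule like the one above.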