Adversarial mimicry attacks against image splicing forensics: An approach for jointly hiding manipulations and creating false detections

Boato, Giulia; De Natale, Francesco G. B.; De Stefano, Gianluca; Pasquini, Cecilia; Roli, Fabio
2024-01-01

Abstract

The term “mimicry attack” was coined in computer security and has since been used in adversarial machine learning: an attacker observes what a machine-learning system has learned and adjusts a malicious input so that it mimics a benign one. In this paper we extend this concept to image forensics, allowing an attacker to modify a manipulated image so that it appears pristine when analyzed by a target forensic detector. Recent work has shown that such attacks can be executed against deep-network-based detectors to hide image tampering. We go further: our mimicry attack can force the target detector to identify arbitrary fictitious manipulations while hiding the true ones, so that the user of the forensic detector is completely misled. From a methodological viewpoint, the proposed attack alters the detector-specific intermediate representations according to the pixel distribution of the manipulated image, through a gradient-based optimization process. Experiments on different data sets and detectors demonstrate that our approach succeeds in jointly hiding manipulated areas and arbitrarily adding new ones, comparing favorably with the state of the art on the first task.
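To make the attack model concrete, below is a minimal sketch of a generic gradient-based mimicry attack against a splicing-localization detector. It is an illustration under stated assumptions, not the paper's exact procedure: the detector interface (a callable returning a per-pixel tampering heatmap), the MSE objective on the output, and all names and hyperparameters (mimicry_attack, target_mask, steps, lr, c) are hypothetical.

import torch
import torch.nn.functional as F

def mimicry_attack(detector, x_manip, target_mask, steps=200, lr=1e-2, c=0.05):
    """Hypothetical sketch: optimize an additive perturbation so that the
    detector's localization heatmap matches an attacker-chosen target_mask."""
    delta = torch.zeros_like(x_manip, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x_manip + delta).clamp(0.0, 1.0)  # keep a valid image in [0, 1]
        heatmap = detector(x_adv)                  # per-pixel tampering scores
        # Push the heatmap toward the target while keeping the perturbation small.
        loss = F.mse_loss(heatmap, target_mask) + c * delta.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x_manip + delta).clamp(0.0, 1.0).detach()

# An all-zeros target_mask hides the true manipulation; setting it to 1 inside
# an arbitrary region forces a fictitious detection, as described in the abstract.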
Adversarial mimicry attacks against image splicing forensics: An approach for jointly hiding manipulations and creating false detections / Boato, Giulia; De Natale, Francesco G. B.; De Stefano, Gianluca; Pasquini, Cecilia; Roli, Fabio. - In: PATTERN RECOGNITION LETTERS. - ISSN 0167-8655. - 2024, 179:(2024), pp. 73-79. [10.1016/j.patrec.2024.01.023]
Files in this record:

File: mimicry_attacks_PRL (5).pdf
Access: Restricted (archive administrators only)
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 2.48 MB
Format: Adobe PDF
File: Adversarial mimicry attacks against ima...ulations and creating false detections.pdf
Access: Open access
Type: Publisher's layout (editorial version)
License: Creative Commons
Size: 876.29 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/431230
Citations
  • PubMed Central: not available
  • Scopus: 1
  • Web of Science: 1
  • OpenAlex: not available