
One run to attack them all: finding simultaneously multiple targeted adversarial perturbations / Custode, Leonardo Lucio; Iacca, Giovanni. - (2021), pp. 1-8. (Paper presented at the 2021 IEEE Symposium Series on Computational Intelligence, SSCI 2021, held in Orlando, FL, USA, 5th-7th December 2021) [10.1109/SSCI50451.2021.9660002].

One run to attack them all: finding simultaneously multiple targeted adversarial perturbations

Custode, Leonardo Lucio; Iacca, Giovanni
2021-01-01

Abstract

Deep neural networks are ubiquitous in computer vision applications such as autonomous vehicles, face recognition, and medical imaging. However, deep neural networks may be subject to adversarial perturbations (also called adversarial attacks) which, when performed in high-stakes environments, may lead to severe consequences. A particular class of adversarial attacks, called universal adversarial attacks, consists of attacks that do not target a single image; instead, they aim to fool as many images as possible. Universal adversarial attacks may be either untargeted or targeted. While the goal of an untargeted attack is to deceive the neural network under attack without caring about its output, a targeted attack aims to “steer” the predictions of the neural network towards a specific target. In this work, we propose a method, based on the clonal selection algorithm, to generate targeted universal adversarial attacks for multiple target classes in a single run of the algorithm. Our results show that our approach is able to find multiple targeted universal adversarial attacks for the network under attack, with good success rates. Moreover, by visualizing the attacked images, we observe that our algorithm is able to produce adversarial attacks that fool the network by attacking a relatively small number of pixels. Finally, by analyzing how the attacks generalize on different architectures, we observe intriguing properties of the discovered universal adversarial attacks.
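To illustrate the kind of search the abstract describes, below is a minimal sketch of a clonal selection (CLONALG-style) loop that evolves a targeted universal perturbation. Everything here is an assumption for illustration: the toy linear "classifier" `predict`, the rank-based hypermutation schedule, and the single-target setting (the paper's method handles multiple target classes in one run and attacks real networks); none of these details are taken from the paper itself.

```python
import numpy as np

# Hypothetical stand-in classifier: a toy linear model on flattened 8x8
# "images", used only to make the sketch runnable (the paper attacks CNNs).
rng = np.random.default_rng(0)
N_CLASSES, DIM = 4, 64
W = rng.normal(size=(N_CLASSES, DIM))

def predict(x):
    return int(np.argmax(W @ x))

def success_rate(delta, images, target):
    """Fraction of images steered to class `target` by perturbation `delta`
    (the 'affinity' of the perturbation in CLONALG terms)."""
    return sum(predict(np.clip(x + delta, 0, 1)) == target for x in images) / len(images)

def clonalg_attack(images, target, pop=20, clones=5, gens=30, eps=0.3):
    """Minimal clonal selection loop: keep the highest-affinity perturbations,
    clone them, hypermutate clones more strongly the lower their rank."""
    population = [rng.uniform(-eps, eps, DIM) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda d: -success_rate(d, images, target))
        new_pop = scored[:pop // 2]  # elitist selection of the best half
        for rank, d in enumerate(list(new_pop)):
            for _ in range(clones):
                # hypermutation: strength grows as affinity rank worsens
                sigma = eps * (rank + 1) / (pop // 2)
                c = np.clip(d + rng.normal(0, sigma, DIM), -eps, eps)
                new_pop.append(c)
        population = sorted(new_pop, key=lambda d: -success_rate(d, images, target))[:pop]
    best = population[0]
    return best, success_rate(best, images, target)

images = [rng.uniform(0, 1, DIM) for _ in range(16)]
delta, rate = clonalg_attack(images, target=2)
```

The `eps` bound keeps the perturbation small in magnitude; a sparsity constraint (mutating only a few pixels) would bring the sketch closer to the "small number of attacked pixels" behavior reported in the abstract.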
2021
IEEE Symposium Series on Computational Intelligence (SSCI)
345 E 47TH ST, NEW YORK, NY 10017 USA
IEEE
978-1-7281-9048-8
Files in this record:

Adversarial_Attacks_with_ClonALG.pdf (archive administrators only)
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 513.77 kB
Format: Adobe PDF

One_run_to_attack_them_all_finding_simultaneously_multiple_targeted_adversarial_perturbations.pdf (archive administrators only)
Type: Publisher's layout version
License: All rights reserved
Size: 531.57 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/329479
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: n/a