Black-box adversarial attacks using evolution strategies / Qiu, H.; Custode, L. L.; Iacca, G. - (2021), pp. 1827-1833. (Paper presented at the 2021 Genetic and Evolutionary Computation Conference, GECCO 2021, held in Lille, 10-14 July 2021) [10.1145/3449726.3463137].

Black-box adversarial attacks using evolution strategies

Custode L. L.; Iacca G.
2021-01-01

Abstract

In the last decade, deep neural networks have proven to be very powerful in computer vision tasks, starting a revolution in the computer vision and machine learning fields. However, deep neural networks are usually not robust to perturbations of the input data: several studies have shown that slightly altering the content of an image can cause a dramatic drop in the accuracy of the attacked network. Many methods for generating such adversarial samples rely on gradients, which are usually not available to an attacker in real-world scenarios. In contrast, black-box adversarial attacks do not use gradient information and are therefore better suited to real-world attack scenarios. In this work, we compare three well-known evolution strategies for generating black-box adversarial attacks on image classification tasks. Our results show that the attacked neural networks can, in most cases, be easily fooled by all the algorithms under comparison; they also show that some black-box optimization algorithms may perform better in "harder" setups, both in terms of attack success rate and efficiency (i.e., number of queries).
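To make the black-box setting concrete, the sketch below (Python with NumPy, not from the paper) shows how a simple (1+1) evolution strategy can mount a query-only attack: it repeatedly samples a Gaussian perturbation of the image and keeps it only if the attacked model's confidence in the true class decreases. This is a minimal illustration rather than the algorithms actually compared in the paper; model_predict is a hypothetical stand-in for the attacked network, and all parameters (sigma, query budget, image size) are placeholder values.

# Minimal (1+1)-ES black-box attack sketch (illustrative only, not the paper's setup).
import numpy as np

_rng = np.random.default_rng(0)
_W = _rng.normal(size=(10, 32 * 32 * 3))  # toy linear "network" standing in for the real model

def model_predict(image):
    """Hypothetical black-box classifier: image in [0, 1]^(32x32x3) -> class probabilities."""
    logits = _W @ image.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def es_attack(image, true_label, sigma=0.05, max_queries=1000):
    """(1+1)-ES: keep a Gaussian perturbation only if it lowers the true-class probability."""
    adv = image.copy()
    best_probs = model_predict(adv)
    queries = 1
    while queries < max_queries:
        if np.argmax(best_probs) != true_label:
            return adv, queries, True            # misclassification achieved
        candidate = np.clip(adv + _rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)
        cand_probs = model_predict(candidate)    # one query to the black box
        queries += 1
        if cand_probs[true_label] < best_probs[true_label]:
            adv, best_probs = candidate, cand_probs
    return adv, queries, False

# Usage on a random image whose (assumed) true label is the model's current prediction
x = _rng.random((32, 32, 3))
label = int(np.argmax(model_predict(x)))
adv_x, n_queries, success = es_attack(x, label)
print(f"success={success}, queries={n_queries}")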
2021
GECCO 2021 Companion - Proceedings of the 2021 Genetic and Evolutionary Computation Conference Companion
New York
Association for Computing Machinery, Inc
9781450383516
Qiu, H.; Custode, L. L.; Iacca, G.
Files in this item:
2104.15064.pdf

Open access

Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 1.26 MB
Format: Adobe PDF
wksp120s2-file1.pdf

Restricted access (archive managers only)

Description: first online
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 1.23 MB
Format: Adobe PDF
3449726.3463137.pdf

Restricted access (archive managers only)

Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 850.9 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/316088
Citations
  • Scopus 16