
Detect Adversarial Attacks against Deep Neural Networks with GPU Monitoring / Ceccarelli, A.; Zoppi, T. - In: IEEE ACCESS. - ISSN 2169-3536. - ELECTRONIC. - 9:(2021), pp. 150579-150591. [10.1109/ACCESS.2021.3125920]

Detect Adversarial Attacks against Deep Neural Networks with GPU Monitoring

Ceccarelli, A.; Zoppi, T.
2021-01-01

Abstract

Deep Neural Networks (DNNs) are the preferred choice for image-based machine learning applications in several domains. However, DNNs are vulnerable to adversarial attacks: carefully crafted perturbations of input images that fool a DNN model. Adversarial attacks may prevent the adoption of DNNs in security-critical tasks; consequently, significant research effort is devoted to securing DNNs. Typical approaches either increase model robustness, add detection capabilities to the model, or operate on the input data. Instead, in this paper we propose to detect ongoing attacks by monitoring performance indicators of the underlying Graphics Processing Unit (GPU). Adversarial attacks generate images that activate the neurons of a DNN differently than legitimate images do. This also alters GPU activity, which can be observed through software monitors and anomaly detectors. This paper presents our monitoring and detection system, together with an extensive experimental analysis covering 14 adversarial attacks, 3 datasets, and 12 models. Results show that, despite limitations on the monitoring resolution, adversarial attacks can be detected in most cases, with peaks of detection accuracy above 90%.
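For context, the mechanism outlined in the abstract can be made concrete as follows: GPU performance indicators are sampled at a fixed rate while the DNN serves inference requests, a one-class anomaly detector is trained on traces collected with legitimate images only, and at run time an alarm is raised when the sampled indicators deviate from that baseline. The sketch below is illustrative and is not the authors' implementation: it assumes NVML access through the pynvml bindings and uses scikit-learn's IsolationForest as a stand-in detector, with metric choice, sampling period, and alarm threshold as placeholder assumptions.

import time
import numpy as np
import pynvml
from sklearn.ensemble import IsolationForest

# Illustrative sketch (assumptions, not the paper's system): sample GPU
# indicators via NVML and flag windows that a one-class detector, trained
# on attack-free traces, marks as anomalous.

def sample_gpu_indicators(handle):
    """Read a small vector of GPU performance indicators."""
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)   # % time SMs / memory busy
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # bytes used / total
    power = pynvml.nvmlDeviceGetPowerUsage(handle)        # milliwatts
    return [util.gpu, util.memory, mem.used / mem.total, power / 1000.0]

def collect_trace(handle, n_samples, period_s=0.05):
    """Collect a window of indicator samples while the DNN serves inputs."""
    rows = []
    for _ in range(n_samples):
        rows.append(sample_gpu_indicators(handle))
        time.sleep(period_s)
    return np.array(rows)

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# Training phase: record GPU behaviour while the model classifies
# legitimate images only (attack-free baseline).
baseline = collect_trace(gpu, n_samples=2000)
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Detection phase: keep sampling; raise an alarm when a window is
# dominated by samples the detector labels as anomalous.
window = collect_trace(gpu, n_samples=100)
labels = detector.predict(window)  # +1 = normal, -1 = anomaly
if np.mean(labels == -1) > 0.5:
    print("Possible adversarial activity: GPU behaviour deviates from the baseline")

pynvml.nvmlShutdown()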
Files in this record:

Detect_Adversarial_Attacks_Against_Deep_Neural_Networks_With_GPU_Monitoring.pdf
  Access: open access
  Type: Publisher's version (publisher's layout)
  License: Creative Commons
  Size: 2.49 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/390275
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science (ISI): 1
  • OpenAlex: ND