
Visual Prompt Engineering for Enhancing Facial Recognition Systems Robustness Against Evasion Attacks / Gupta, Sandeep; Raja, Kiran; Passerone, Roberto. - In: IEEE ACCESS. - ISSN 2169-3536. - 2024:(2024), pp. 152212-152223. [10.1109/ACCESS.2024.3479949]

Visual Prompt Engineering for Enhancing Facial Recognition Systems Robustness Against Evasion Attacks

Sandeep Gupta;Roberto Passerone
2024-01-01

Abstract

Deep learning has unequivocally emerged as the backbone of systems demanding artificial intelligence across diverse domains, from simple applications to highly sensitive ones. For instance, foundation models based on deep neural networks (DNNs) can play a crucial role in the design of security-sensitive systems, such as facial recognition systems (FRS). Despite achieving exceptional accuracy and human-like performance, DNNs remain severely susceptible to adversarial attacks. While DNNs are deemed irreplaceable in the artificial intelligence domain, their vulnerability to adversarial examples can be detrimental to the robustness of sensitive systems. This paper presents a pilot study introducing an attack-defense framework aimed at enhancing the robustness of FRS against evasion attacks. Our generative adversarial network (GAN) based attack successfully deceives FRS, demonstrating that they are vulnerable not only to synthetic images visually comparable to real user images (i.e., best-match scenarios) but also to partially constructed user images (i.e., average-match scenarios). Based on our analysis, we propose a novel solution that extends the visual prompt engineering (VPE) concept to detecting synthetic images, securing downstream tasks in FRS. The VPE detection module achieves an accuracy of 97.92% in the average-match scenario and 87.08% in the best-match scenario on our generated dataset. Furthermore, we use the TrueFace post-social dataset to validate the efficacy of the detection module, obtaining an accuracy of 91.96%. Our experimental evaluation shows that VPE can effectively tackle GAN-based attacks from average-match to best-match scenarios, thus enhancing the overall robustness of a security-sensitive system against evasion attacks.
2024
Gupta, Sandeep; Raja, Kiran; Passerone, Roberto
Files for this item:
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/440512
Warning! The displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: ND