Can explainability and deep-learning be used for localizing vulnerabilities in source code?

Marchetto, Alessandro
First author
2024-01-01

Abstract

Security vulnerabilities are weaknesses of software, due for instance to design flaws or implementation bugs, that can be exploited and lead to potentially devastating security breaches. Traditionally, static code analysis is recognized as effective in detecting software security vulnerabilities, but at the expense of the high human effort required to check the large number of false positives it produces. Deep-learning methods have recently been proposed to overcome this limitation of static code analysis by detecting vulnerable code through vulnerability-related patterns learned from large source code datasets. However, the use of these methods for localizing the causes of a vulnerability in the source code, i.e., identifying the statements that contain the bugs, has not been extensively explored. In this work, we experiment with the use of deep-learning and explainability methods for detecting and localizing vulnerability-related statements in code fragments (named snippets). We aim to understand whether the code features that deep-learning methods adopt to identify vulnerable code snippets can also support developers in debugging the code, thus localizing the vulnerability's cause. Our work shows that deep-learning methods can be effective in detecting vulnerable code snippets, under certain conditions, but that the code features such methods use can only partially capture the actual causes of the vulnerabilities in the code.
2024
AST '24: Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024)
United States
ACM
979-8-4007-0588-5
Marchetto, Alessandro. "Can explainability and deep-learning be used for localizing vulnerabilities in source code?" In Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024), co-located with the 46th International Conference on Software Engineering (ICSE 2024), Lisbon, Portugal, April 15-16, 2024, pp. 110-119. DOI: 10.1145/3644032.3644448.
Files in this record:

File: 3644032.3644448.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 981.08 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/415710
Citations
  • PubMed Central: not available
  • Scopus: 7
  • Web of Science: 4
  • OpenAlex: not available