Enhancing Network Intrusion Detection: An Online Methodology for Performance Analysis / Magnani, Simone; Doriguzzi-Corin, Roberto; Siracusa, Domenico. - (2023), pp. 510-515. (Paper presented at the 9th IEEE International Conference on Network Softwarization, NetSoft 2023, held in Madrid, Spain, 19-23 June 2023) [10.1109/NetSoft57336.2023.10175465].

Enhancing Network Intrusion Detection: An Online Methodology for Performance Analysis

Roberto Doriguzzi-Corin; Domenico Siracusa
2023-01-01

Abstract

Machine learning models have been extensively proposed for classifying network flows as benign or malicious, either in-network or at the endpoints of the infrastructure. Typically, the performance of such models is assessed by evaluating the trained model against a portion of the available dataset. However, in a production scenario, these models are fed by a monitoring stage that collects information from flows and provides inputs to a filtering stage that eventually blocks malicious traffic. To the best of our knowledge, no work has analysed the entire pipeline, focusing on its performance in terms of both inputs (i.e., the information collected from each flow) and outputs (i.e., the system’s ability to prevent an attack from reaching the application layer). In this paper, we propose a methodology for evaluating the effectiveness of a Network Intrusion Detection System (NIDS) by placing the model evaluation test alongside an online test that simulates the entire monitoring-detection-mitigation pipeline. We assess the system’s outputs based on different input configurations, using state-of-the-art detection models and datasets. Our results highlight the importance of inputs for the throughput of the NIDS, which can decrease by more than 50% with heavier configurations. Furthermore, our research indicates that relying solely on the performance of the detection model may not be enough to evaluate the effectiveness of the entire NIDS process. Indeed, even when achieving near-optimal False Negative Rate (FNR) values (e.g., 0.01), a substantial amount of malicious traffic (e.g., 70%) may still successfully reach its target.
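The abstract's closing observation, that a near-zero flow-level FNR can coexist with most malicious traffic still reaching the target, can be made concrete with a toy calculation. The Python sketch below is not the authors' evaluation code: the names (MaliciousFlow, pipeline_metrics) and the assumption that each flow is classified only after a fixed number of packets has been observed are hypothetical, and the numbers are invented for illustration rather than taken from the paper. It simply contrasts the model-centric metric (flow FNR) with a pipeline-centric one (the share of malicious packets delivered before a flow is detected and blocked).

from dataclasses import dataclass

@dataclass
class MaliciousFlow:
    total_packets: int            # packets the flow would deliver if never blocked
    detected: bool                # did the model flag the flow as malicious?
    packets_before_verdict: int   # packets forwarded while the monitor collects features

def pipeline_metrics(flows):
    """Contrast flow-level FNR with the fraction of malicious packets
    that still reach the target despite detection and mitigation."""
    missed = sum(1 for f in flows if not f.detected)
    fnr = missed / len(flows)

    delivered = 0
    total = 0
    for f in flows:
        total += f.total_packets
        if f.detected:
            # Even detected flows leak the packets sent before the verdict.
            delivered += min(f.packets_before_verdict, f.total_packets)
        else:
            # Missed flows deliver everything.
            delivered += f.total_packets
    return fnr, delivered / total

if __name__ == "__main__":
    # Hypothetical scenario: 100 malicious flows, the model misses only one
    # (FNR = 0.01), but the flows are short, so the packets observed before
    # classification already account for most of the attack traffic.
    flows = [MaliciousFlow(total_packets=12, detected=True, packets_before_verdict=10)
             for _ in range(99)]
    flows.append(MaliciousFlow(total_packets=12, detected=False, packets_before_verdict=10))

    fnr, reached = pipeline_metrics(flows)
    print(f"flow-level FNR: {fnr:.2f}")
    print(f"malicious packets reaching target: {reached:.0%}")  # far larger than the FNR suggests

Under these assumptions the flow-level FNR is 0.01, yet well over 80% of the malicious packets still reach the application layer, which is the kind of gap between model evaluation and end-to-end pipeline evaluation that the proposed online methodology is designed to expose.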
2023
Proceedings of 2023 IEEE 9th International Conference on Network Softwarization (NetSoft)
345 E 47TH ST, NEW YORK, NY 10017 USA
IEEE (Institute of Electrical and Electronics Engineers)
9798350399806
Magnani, Simone; Doriguzzi-Corin, Roberto; Siracusa, Domenico
Files in this record:

File: magnaniEnhancingNetworkIntrusion[AAM].pdf
  Access: open access
  Type: Refereed author's manuscript (post-print)
  Licence: All rights reserved
  Size: 472.13 kB
  Format: Adobe PDF

File: magnaniEnhancingNetworkIntrusion[VoR].pdf
  Access: archive administrators only
  Type: Publisher's layout (version of record)
  Licence: All rights reserved
  Size: 523.56 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/446811
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: n/a