Atif, Muhammad; Ceccarelli, Andrea; Zoppi, Tommaso; Bondavalli, Andrea. "Tolerate Failures of the Visual Camera With Robust Image Classifiers." IEEE Access, ISSN 2169-3536, vol. 11 (2023), pp. 5132-5143. DOI: 10.1109/ACCESS.2023.3237394
Tolerate Failures of the Visual Camera With Robust Image Classifiers
Tommaso Zoppi
2023-01-01
Abstract
Deep Neural Networks (DNNs) have become an enabling technology for building accurate image classifiers and are increasingly being applied in many ICT systems such as autonomous vehicles. Unfortunately, classifiers can be deceived by images that are altered due to failures of the visual camera, preventing the proper execution of the classification process. Therefore, it is of utmost importance to build image classifiers that can guarantee accurate classification even in the presence of such camera failures. This study crafts classifiers that are robust to failures of the visual camera by augmenting the training set with artificially altered images that simulate the effects of such failures. This data augmentation approach improves classification accuracy with respect to the most common data augmentation approaches, even in the absence of camera failures. To provide experimental evidence for our claims, we exercise three DNN image classifiers on three image datasets into which we inject the effects of many failures of the visual camera. Finally, we apply eXplainable AI to discuss why classifiers trained with the data augmentation approach proposed in this study can tolerate failures of the visual camera.
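To make the augmentation idea concrete, the sketch below shows how a training set could be extended with copies of the images altered by simulated camera-failure effects. It is a minimal illustration only: the specific effects (Gaussian noise, brightness shift, dead pixels), their parameters, and the helper names (`augment_with_failures`, `FAILURE_EFFECTS`) are assumptions made for this example and do not reproduce the exact fault models or code used in the paper.

```python
import numpy as np

# Images are assumed to be float arrays in [0, 1] with shape (H, W, C).
# The failure simulators below are illustrative, not the paper's fault model.

def add_gaussian_noise(img, sigma=0.05):
    """Simulate sensor noise by adding zero-mean Gaussian noise."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def shift_brightness(img, delta=0.3):
    """Simulate over/under-exposure by shifting pixel intensities."""
    return np.clip(img + delta, 0.0, 1.0)

def dead_pixels(img, fraction=0.01):
    """Simulate dead pixels by zeroing a random subset of pixel locations."""
    out = img.copy()
    h, w = img.shape[:2]
    n = int(fraction * h * w)
    ys = np.random.randint(0, h, n)
    xs = np.random.randint(0, w, n)
    out[ys, xs] = 0.0
    return out

FAILURE_EFFECTS = [add_gaussian_noise, shift_brightness, dead_pixels]

def augment_with_failures(images, labels, per_image=1):
    """Return the original set plus copies altered by simulated camera failures."""
    aug_images, aug_labels = list(images), list(labels)
    for img, label in zip(images, labels):
        for _ in range(per_image):
            effect = FAILURE_EFFECTS[np.random.randint(len(FAILURE_EFFECTS))]
            aug_images.append(effect(img))
            aug_labels.append(label)  # the failure alters the image, not its class
    return np.stack(aug_images), np.array(aug_labels)
```

A classifier would then be trained on the returned augmented arrays exactly as on the original data, since the simulated failures alter the images but not their class labels.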
| File | Size | Format | Access |
|---|---|---|---|
| 10018232.pdf | 2.25 MB | Adobe PDF | Open access |

Type: Editorial version (Publisher's layout)
License: Creative Commons
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.