
Efficient Processing of Spiking Neural Networks via Task Specialization

Lebdeh, Muath Abu (first author); Yildirim, Kasim Sinan (second author); Brunelli, Davide (last author)

2024-01-01

Abstract

Spiking neural networks (SNNs) are considered a candidate for efficient deep learning systems: these networks communicate with 0 or 1 spikes, and their computations do not require multiply operations. On the other hand, SNNs still have a large memory overhead and poor utilization of the memory hierarchy; a powerful SNN has large memory requirements and requires multiple inference steps with dynamic memory access patterns. This paper proposes performing the image classification task as a set of collaborative tasks carried out by specialized SNNs. This specialization allows us to significantly reduce the number of memory operations and improve the utilization of the memory hierarchy. Our results show that the proposed approach improves the energy and latency of SNN inference by more than 10x. In addition, our work shows that designing narrow (and deep) SNNs is computationally more efficient than designing wide (and shallow) SNNs.
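To make the "no multiply operations" point in the abstract concrete, below is a minimal NumPy sketch of one timestep of an integrate-and-fire layer. It is an illustrative assumption, not the method of the paper: because inputs are 0/1 spikes, the weighted sum reduces to adding the weight columns of the inputs that fired, so the update uses only additions, comparisons, and resets. The function name, tensor shapes, and hard-reset rule are hypothetical choices for the example.

import numpy as np

def if_layer_step(spikes_in, weights, v, v_th=1.0):
    """One timestep of an integrate-and-fire (IF) layer on binary spike inputs."""
    # Inputs are 0/1 spikes, so the weighted sum is just the sum of the
    # weight columns belonging to inputs that fired: additions only.
    active = spikes_in.astype(bool)              # (n_in,) which inputs spiked
    v = v + weights[:, active].sum(axis=1)       # (n_out,) membrane potential update
    spikes_out = (v >= v_th).astype(np.uint8)    # fire where the threshold is crossed
    v = np.where(spikes_out == 1, 0.0, v)        # hard reset of neurons that fired
    return spikes_out, v

# Hypothetical usage: 8 inputs, 4 output neurons, one Bernoulli-coded input frame.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))
v = np.zeros(4)
x = (rng.random(8) < 0.3).astype(np.uint8)
out, v = if_layer_step(x, weights, v)

Running such a layer over several timesteps per image is what the abstract refers to as "multiple inference steps"; each step re-reads the weights, which is where the memory-operation cost that the paper targets comes from.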
Efficient Processing of Spiking Neural Networks via Task Specialization / Lebdeh, Muath Abu; Yildirim, Kasim Sinan; Brunelli, Davide. - In: IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE. - ISSN 2471-285X. - 2024:(2024), pp. 1-11. [10.1109/tetci.2024.3370028]
Files for this record:

File: Efficient_Processing_of_Spiking_Neural_Networks_via_Task_Specialization.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 4.3 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/405069
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science (ISI): 0