
Efficient Processing of Spiking Neural Networks via Task Specialization / Lebdeh, Muath Abu; Yildirim, Kasim Sinan; Brunelli, Davide. - In: IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE. - ISSN 2471-285X. - 2024:(2024), pp. 1-11. [10.1109/tetci.2024.3370028]

Efficient Processing of Spiking Neural Networks via Task Specialization

Lebdeh, Muath Abu; Yildirim, Kasim Sinan; Brunelli, Davide
2024-01-01

Abstract

Spiking neural networks (SNNs) are considered a candidate for efficient deep learning systems: these networks communicate with binary (0 or 1) spikes, so their computations require no multiply operations. On the other hand, SNNs still suffer from large memory overhead and poor utilization of the memory hierarchy: a powerful SNN has large memory requirements and needs multiple inference steps with dynamic memory access patterns. This paper proposes performing the image classification task as collaborative tasks of specialized SNNs. This specialization allows us to significantly reduce the number of memory operations and improve the utilization of the memory hierarchy. Our results show that the proposed approach improves the energy and latency of SNN inference by more than 10x. In addition, our work shows that designing narrow (and deep) SNNs is computationally more efficient than designing wide (and shallow) SNNs.
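The multiply-free property mentioned in the abstract can be illustrated with a minimal sketch (this is an illustration of the general principle, not the paper's implementation): because input spikes are 0 or 1, a layer's weighted sum reduces to accumulating the weights of the inputs that spiked. The function name `spiking_layer` and all parameters below are assumptions chosen for the example.

```python
import numpy as np

def spiking_layer(spikes, weights, membrane, threshold=1.0):
    """One inference step of an integrate-and-fire-style layer.

    spikes:   (n_in,) 0/1 input spike vector
    weights:  (n_in, n_out) synaptic weights
    membrane: (n_out,) membrane potentials carried across time steps
    """
    # Multiply-free: select and add only the weight rows where a spike
    # occurred, instead of computing a full matrix-vector product.
    membrane = membrane + weights[spikes == 1].sum(axis=0)
    out = (membrane >= threshold).astype(np.int8)  # emit 0/1 output spikes
    membrane = np.where(out == 1, 0.0, membrane)   # reset neurons that fired
    return out, membrane

rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=8)          # binary input spikes
w = rng.normal(size=(8, 4))             # synaptic weights
out, mem = spiking_layer(s, w, np.zeros(4))
```

Because inference runs over multiple time steps, `membrane` is carried from one call to the next, which is one source of the dynamic memory access patterns the abstract refers to.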
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11572/405069
Warning! The displayed data have not been validated by the university.
