
Per Layer Specialization for Memory-Efficient Spiking Neural Networks

Lebdeh, Muath Abu; Yildirim, Kasim Sinan; Brunelli, Davide
2024-01-01

Abstract

Specializing the dataset used to train narrow spiking neural networks (SNNs) was recently proposed as an efficient approach to SNN processing. This approach mainly reduces the memory overhead of SNNs, improving overall processing efficiency and hardware cost. However, task specialization using narrow, independent SNNs leads to a non-negligible accuracy degradation in some applications. In addition, task specialization in SNNs imposes a substantial training burden and requires human expertise to design the specialized tasks. In this paper, we propose the use of gated, specialized layers in SNNs to reduce memory overhead while maintaining state-of-the-art accuracy. The proposed solution downsizes the width of an SNN on a per-layer basis, allows different classes to reuse some specialized units, and eliminates the training burden of previous work. Our results show an improvement in inference processing efficiency on real general-purpose hardware of up to 3x while maintaining ...
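The abstract only summarizes the mechanism, so below is a minimal, purely illustrative Python sketch of what a gated, per-layer specialized SNN layer could look like; it is not the authors' implementation. It assumes a leaky integrate-and-fire layer whose neurons are split into a shared slice (reused across class groups) and several specialized slices, with a binary gate enabling only the relevant slices for a given input. All names (GatedLIFLayer, n_groups, group_id, etc.) and parameter values are hypothetical.

# Illustrative sketch only -- not the implementation from the paper.
# Assumes a LIF layer whose neurons are split into a shared slice,
# reused by every class group, plus one specialized slice per group;
# a binary gate enables only the relevant slices for each input.
# All names and parameters here are hypothetical.
import numpy as np

class GatedLIFLayer:
    def __init__(self, n_in, n_out, n_groups, n_shared=0,
                 v_th=1.0, decay=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.5, size=(n_out, n_in))   # synaptic weights
        self.v = np.zeros(n_out)                             # membrane potentials
        self.v_th, self.decay = v_th, decay
        self.shared = np.arange(n_shared)                    # units reused by all groups
        per_group = (n_out - n_shared) // n_groups
        self.groups = [np.arange(n_shared + g * per_group,
                                 n_shared + (g + 1) * per_group)
                       for g in range(n_groups)]

    def gate_mask(self, group_id):
        # Binary gate: enable the shared slice plus one specialized slice.
        mask = np.zeros(self.w.shape[0], dtype=bool)
        mask[self.shared] = True
        mask[self.groups[group_id]] = True
        return mask

    def step(self, in_spikes, group_id):
        # One leaky integrate-and-fire timestep, restricted to gated units.
        mask = self.gate_mask(group_id)
        self.v[mask] = self.decay * self.v[mask] + self.w[mask] @ in_spikes
        fired = mask & (self.v >= self.v_th)
        self.v[fired] = 0.0                                  # reset after spiking
        return fired.astype(float)

# Toy usage: route one input spike vector through the slice gated for group 1.
layer = GatedLIFLayer(n_in=16, n_out=32, n_groups=4, n_shared=8)
in_spikes = (np.random.default_rng(1).random(16) < 0.3).astype(float)
out_spikes = layer.step(in_spikes, group_id=1)

Because only the shared slice and one specialized slice are active per input, the effective layer width, and hence the weight memory that must be resident at inference time, shrinks; this is the kind of per-layer memory saving the abstract refers to.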
Year: 2024
Conference: 2024 International Conference on Neuromorphic Systems (ICONS)
Location: Arlington, VA, USA
Publisher: IEEE
ISBN: 9798350368659
ISBN: 979-8-3503-6866-6
Authors: Lebdeh, Muath Abu; Yildirim, Kasim Sinan; Brunelli, Davide
Citation: Per Layer Specialization for Memory-Efficient Spiking Neural Networks / Lebdeh, Muath Abu; Yildirim, Kasim Sinan; Brunelli, Davide. - (2024), pp. 169-176. (ICONS, Arlington, VA, USA, 30 July 2024 - 02 August 2024) [10.1109/icons62911.2024.00032].
Files in this record:
File: published_version.pdf (access restricted to repository managers)
Type: Publisher's version (publisher's layout)
License: All rights reserved
Size: 917.32 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/439813
Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: 0