
Training Quantized Neural Networks with STE variants: the Additive Noise Annealing Algorithm

Leonardi, Gian Paolo (second author); 2022-01-01

Abstract

Training quantised neural networks (QNNs) is a non-differentiable optimisation problem since weights and features are output by piecewise constant functions. The standard solution is to apply the straight-through estimator (STE), using different functions during the inference and gradient computation steps. Several STE variants have been proposed in the literature aiming to maximise the task accuracy of the trained network. In this paper, we analyse STE variants and study their impact on QNN training. We first observe that most such variants can be modelled as stochastic regularisations of stair functions; although this intuitive interpretation is not new, our rigorous discussion generalises to further variants. Then, we analyse QNNs mixing different regularisations, finding that some suitably synchronised smoothing of each layer map is required to guarantee pointwise compositional convergence to the target discontinuous function. Based on these theoretical insights, we propose additive noise annealing (ANA), a new algorithm to train QNNs encompassing standard STE and its variants as special cases. When testing ANA on the CIFAR-10 image classification benchmark, we find that the major impact on task accuracy is not due to the qualitative shape of the regularisations but to the proper synchronisation of the different STE variants used in a network, in accordance with the theoretical results.
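The two ideas the abstract rests on, the straight-through estimator and the stochastic-regularisation view of stair functions, can be sketched as follows. This is a minimal NumPy illustration of the general techniques, not the paper's implementation; all function names and the choice of rounding quantiser are assumptions for the sake of the example.

```python
import numpy as np

def quantize(x):
    # Forward pass: a piecewise-constant stair function
    # (here: round to the nearest integer), whose derivative
    # is zero almost everywhere.
    return np.round(x)

def quantize_grad_ste(upstream_grad):
    # Backward pass under the vanilla STE: pretend the quantiser
    # was the identity, so the incoming gradient passes through
    # unchanged. (Clipped STE variants would instead zero the
    # gradient where the input falls outside some range.)
    return upstream_grad

def smoothed_quantize(x, noise_scale=0.5, n_samples=100_000, seed=0):
    # Stochastic-regularisation view: adding uniform noise before
    # quantising and taking the expectation smooths the stair
    # function, so it acquires a well-defined, non-zero derivative
    # almost everywhere. Estimated here by Monte Carlo averaging.
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-noise_scale, noise_scale, size=n_samples)
    return float(np.round(x + noise).mean())
```

For instance, `smoothed_quantize(0.4)` returns a value near 0.4, illustrating that the expectation of the noisy quantiser interpolates smoothly between the stair's plateaus at 0 and 1; annealing `noise_scale` towards zero recovers the original discontinuous function.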
Year: 2022
Conference: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher location: Piscataway, NJ (USA)
Publisher: IEEE
ISBN: 9781665469463
Authors: Spallanzani, Matteo; Leonardi, Gian Paolo; Benini, Luca
Training Quantized Neural Networks with STE variants: the Additive Noise Annealing Algorithm / Spallanzani, Matteo; Leonardi, Gian Paolo; Benini, Luca. - ELECTRONIC. - (2022), pp. 470-479. (Paper presented at CVPR 2022, held in New Orleans, LA, USA, 18-24 June 2022) [10.1109/CVPR52688.2022.00056].
Files in this record:

ana.pdf (archive administrators only)
Description: main file
Type: Refereed author's manuscript (post-print)
License: All rights reserved
Size: 10.06 MB, Adobe PDF (View/Open)

Training_Quantised_Neural_Networks_with_STE_Variants_the_Additive_Noise_Annealing_Algorithm.pdf (archive administrators only)
Type: Publisher's layout
License: All rights reserved
Size: 4.86 MB, Adobe PDF (View/Open)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/400932
Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 2
  • OpenAlex: n/a