Fast Differentiable Matrix Square Root

Yue Song; Nicu Sebe; Wei Wang
2022-01-01

Abstract

Computing the matrix square root or its inverse in a differentiable manner is important in a variety of computer vision tasks. Previous methods either adopt the Singular Value Decomposition (SVD) to explicitly factorize the matrix or use the Newton-Schulz iteration (NS iteration) to derive an approximate solution. However, neither method is computationally efficient in the forward or the backward pass. In this paper, we propose two more efficient variants to compute the differentiable matrix square root. For the forward propagation, one method uses a Matrix Taylor Polynomial (MTP), and the other uses Matrix Padé Approximants (MPA). The backward gradient is computed by iteratively solving the continuous-time Lyapunov equation using the matrix sign function. Both methods yield considerable speed-ups over the SVD and the NS iteration. Experimental results on de-correlated batch normalization and the second-order vision transformer demonstrate that our methods also achieve competitive, and sometimes slightly better, performance. The code is available at https://github.com/KingJamesSong/FastDifferentiableMatSqrt.
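As a rough illustration of the forward pass described in the abstract, below is a minimal NumPy sketch of the Matrix Taylor Polynomial idea: expand (I - Z)^{1/2} as a truncated binomial series with Z = I - A/||A||_F, so that the series converges for a symmetric positive-definite A. The Frobenius-norm pre-normalization and truncation degree are illustrative assumptions, and the MPA (Padé) variant is not shown; the authors' official PyTorch implementation is in the repository linked above.

import numpy as np

def mtp_sqrtm(A, num_terms=16):
    # Approximate A^{1/2} via a truncated Taylor (binomial) series:
    # (I - Z)^{1/2} = sum_k binom(1/2, k) (-Z)^k, with Z = I - A/||A||_F,
    # so the spectral radius of Z stays below 1 and the series converges.
    n = A.shape[0]
    norm = np.linalg.norm(A, 'fro')   # pre-normalize for convergence
    Z = np.eye(n) - A / norm
    S = np.eye(n)                     # k = 0 term of the series
    P = np.eye(n)                     # running power Z^k
    coeff = 1.0                       # binom(1/2, k), updated by recurrence
    for k in range(1, num_terms):
        coeff *= (0.5 - (k - 1)) / k  # binom(1/2, k) from binom(1/2, k-1)
        P = P @ Z
        S += coeff * (-1) ** k * P
    return np.sqrt(norm) * S          # undo the normalization

The backward pass can be sketched in the same spirit. For Y = A^{1/2}, differentiating Y Y = A shows that the gradient G_A solves the continuous-time Lyapunov equation Y G_A + G_A Y = G_Y, where G_Y is the incoming gradient. The abstract states this equation is solved iteratively via the matrix sign function; the sketch below uses the classical inverse-based Newton iteration for the sign of a block matrix (Roberts' method), which is one standard choice and not necessarily the exact iteration used in the paper.

def lyap_sign(B, C, num_iters=20):
    # Solve B X + X B = C (B symmetric positive-definite) through the
    # matrix sign function: sign([[B, C], [0, -B]]) = [[I, 2X], [0, -I]].
    # The Newton iteration Z <- (Z + Z^{-1}) / 2 converges to sign(Z).
    n = B.shape[0]
    Z = np.block([[B, C], [np.zeros((n, n)), -B]])
    for _ in range(num_iters):
        Z = 0.5 * (Z + np.linalg.inv(Z))
    return 0.5 * Z[:n, n:]            # top-right block holds 2X

# quick sanity check on a random SPD matrix
rng = np.random.default_rng(0)
R = rng.standard_normal((4, 4))
A = R @ R.T + 4 * np.eye(4)
Y = mtp_sqrtm(A)
print(np.linalg.norm(Y @ Y - A) / np.linalg.norm(A))   # small relative error
G_Y = rng.standard_normal((4, 4))
G_Y = 0.5 * (G_Y + G_Y.T)                              # symmetric upstream grad
G_A = lyap_sign(Y, G_Y)
print(np.linalg.norm(Y @ G_A + G_A @ Y - G_Y))         # residual near zero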
2022
International Conference on Learning Representations (ICLR'22)
S.l.
OpenReview
Song, Yue; Sebe, Niculae; Wang, Wei
Fast Differentiable Matrix Square Root / Song, Yue; Sebe, Niculae; Wang, Wei. - (2022), pp. 1-19. (Paper presented at The Tenth International Conference on Learning Representations (ICLR'22), held virtually, 25-29 April 2022).
Files in this record:

File: 352_fast_differentiable1.pdf
Access: open access
Type: Editorial version (Publisher's layout)
License: All rights reserved
Size: 9.82 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/361304
Citations
  • Scopus: 6