
Algorithm Design for Tensor Units

Vella F.
2021-01-01

Abstract

To respond to the intense computational load of deep neural networks, a plethora of domain-specific architectures have been introduced, such as Google Tensor Processing Units and NVIDIA Tensor Cores. A common feature of these architectures is a hardware circuit for efficiently computing a dense matrix multiplication of a given small size. In order to broaden the class of algorithms that exploit these systems, we propose a computational model, named the TCU model, that captures the ability to natively multiply small matrices. We then use the TCU model for designing fast algorithms for several problems, including matrix operations (dense and sparse multiplication, Gaussian Elimination), graph algorithms (transitive closure, all pairs shortest distances), Discrete Fourier Transform, stencil computations, integer multiplication, and polynomial evaluation. We finally highlight a relation between the TCU model and the external memory model.
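The core primitive the abstract describes is a hardware circuit that multiplies dense matrices of a fixed small size; larger multiplications are decomposed into such tile products. As a minimal illustrative sketch (not the paper's actual TCU algorithms), the following Python code assumes a hypothetical tile size `S` and a stand-in `tile_mm` for the hardware primitive, and builds an n x n multiplication out of native-size tile multiplications:

```python
import numpy as np

S = 4  # assumed tile size natively supported by the tensor unit (hypothetical)

def tile_mm(a, b):
    """Stand-in for the hardware primitive: multiply two S x S tiles."""
    assert a.shape == (S, S) and b.shape == (S, S)
    return a @ b

def tcu_matmul(A, B):
    """Multiply n x n matrices by decomposing them into S x S tiles, so
    every multiplication issued is one the unit supports natively.
    Assumes n is a multiple of S."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, S):
        for j in range(0, n, S):
            for k in range(0, n, S):
                # Accumulate the (i, j) output tile from native tile products
                C[i:i+S, j:j+S] += tile_mm(A[i:i+S, k:k+S], B[k:k+S, j:j+S])
    return C
```

This blocked decomposition is the standard way to express a large multiplication in terms of a fixed-size primitive; the paper's TCU model analyzes the cost of such decompositions and extends the idea to the other listed problems.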
2021
Euro-Par 2021: Parallel Processing
GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND
Springer
978-3-030-85664-9
978-3-030-85665-6
Chowdhury, R.; Silvestri, F.; Vella, F.
Algorithm Design for Tensor Units / Chowdhury, R.; Silvestri, F.; Vella, F. - ELECTRONIC. - 12820:(2021), pp. 353-367. (Paper presented at the 27th International European Conference on Parallel and Distributed Computing, Euro-Par 2021, held in Lisbon, Portugal, 1–3 September 2021) [10.1007/978-3-030-85665-6_22].
Files in this record:

Chowdhury2021_Chapter_AlgorithmDesignForTensorUnits.pdf
Access: archive managers only
Type: Publisher's version (Publisher's layout)
License: All rights reserved
Size: 718.73 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/332834
Citations
  • Scopus 1
  • Web of Science 2