A Computational Model for Tensor Core Units / Chowdhury, R.; Silvestri, F.; Vella, F. - ELECTRONIC. - (2020), pp. 519-521. (Presented at the 32nd ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2020, held in the USA in 2020) [10.1145/3350755.3400252].
A Computational Model for Tensor Core Units
Vella F.
2020-01-01
Abstract
To respond to the need for efficient training and inference of deep neural networks, a plethora of domain-specific architectures have been introduced, such as Google Tensor Processing Units and NVIDIA Tensor Cores. A common feature of these architectures is the design for efficiently computing a dense matrix product of a given small size. In order to broaden the class of algorithms that exploit these systems, we propose a computational model, named the TCU model, that captures the ability to natively multiply small matrices. We then use the TCU model for designing fast algorithms for several problems, including dense and sparse matrix multiplication and the Discrete Fourier Transform. We finally highlight a relation between the TCU model and the external memory model.
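To illustrate the kind of primitive the TCU model captures, the following is a minimal sketch (not taken from the paper): a dense matrix product decomposed into products of small fixed-size tiles, where `native_multiply` stands in for the hardware unit that natively multiplies small matrices. The tile size `S` and all function names are assumptions for illustration only.

```python
S = 2  # assumed native tile size of the hypothetical unit

def native_multiply(a, b):
    """Stand-in for the hardware primitive: multiply two S x S tiles."""
    return [[sum(a[i][k] * b[k][j] for k in range(S)) for j in range(S)]
            for i in range(S)]

def tcu_matmul(A, B):
    """Multiply two n x n matrices (n a multiple of S) using only
    S x S native tile products, accumulating partial results per tile."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for bi in range(0, n, S):          # row block of C
        for bj in range(0, n, S):      # column block of C
            for bk in range(0, n, S):  # inner dimension block
                a = [row[bk:bk + S] for row in A[bi:bi + S]]
                b = [row[bj:bj + S] for row in B[bk:bk + S]]
                t = native_multiply(a, b)
                for i in range(S):
                    for j in range(S):
                        C[bi + i][bj + j] += t[i][j]
    return C
```

In this sketch every arithmetic operation on matrix entries happens inside `native_multiply`, mirroring how a TCU-style algorithm charges its work to fixed-size matrix products rather than scalar operations.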