Routines Accelerated by the CUDA Package
<Text-field style="Heading 2" layout="Heading 2" bookmark="bkmrk0">Supported Routines</Text-field>
This page describes the routines that are accelerated when Compute Unified Device Architecture (CUDA) acceleration is enabled.
Some routines can be accelerated only on certain hardware versions. In particular, early CUDA-enabled hardware does not support double-precision (float[8]) routines. For more information, see Supported Hardware for CUDA Acceleration.
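Whether acceleration is available and active can be checked from within Maple using the CUDA package. A minimal sketch (the exact output of Properties varies by device, and the set of exports may differ between Maple versions):

```maple
with(CUDA):
Enable();      # turn CUDA acceleration on; errors if no supported device is found
IsEnabled();   # true when CUDA acceleration is currently active
Properties();  # properties of the CUDA device, including its compute capability
```

If Enable raises an error, the installed hardware or driver does not support CUDA acceleration and all routines fall back to their CPU implementations.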
<Text-field style="Heading 3" layout="Heading 3" bookmark="bkmrk1">LinearAlgebra[MatrixMatrixMultiply] </Text-field>
Matrix-Matrix multiplication is accelerated for Matrices that satisfy the following conditions:
The two Matrices must have the same data type. If the CUDA hardware has a compute capability of 1.2 or lower, the data type must be float[4]. If the compute capability is 1.3 or higher, the float[8] and complex[8] data types are also accelerated.
The two Matrices must have rectangular storage; their shape can be symmetric.
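For example, a multiplication that qualifies for acceleration on any supported device might look like the following (float[4] Matrices with the default rectangular storage; the 2000 x 2000 size is arbitrary and chosen only because GPU acceleration pays off on large Matrices):

```maple
with(LinearAlgebra):
CUDA:-Enable();
# Construct two float[4] Matrices; outputoptions sets the data type.
A := RandomMatrix(2000, 2000, outputoptions = [datatype = float[4]]):
B := RandomMatrix(2000, 2000, outputoptions = [datatype = float[4]]):
# With CUDA enabled, this call is executed on the GPU.
C := MatrixMatrixMultiply(A, B):
```

On hardware with compute capability 1.3 or higher, the same call is also accelerated when both Matrices use datatype = float[8] or complex[8].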
See Also: CUDA, LinearAlgebra[MatrixMatrixMultiply], Supported Hardware for CUDA Acceleration