gemm-optimization
There are 14 repositories under the gemm-optimization topic.
tpoisonooo/how-to-optimize-gemm
Row-major matmul optimization.
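As a reference point for entries like this one, here is a minimal sketch (not taken from the repository) of the row-major baseline such tutorials typically start from, together with the classic i-k-j loop reordering that makes the innermost loop stream through B and C contiguously; function names are illustrative.

    /* Naive row-major matmul: C[MxN] += A[MxK] * B[KxN].
     * i-j-k loop order; B is walked column-wise, which caches poorly. */
    void matmul_naive(int M, int N, int K,
                      const float *A, const float *B, float *C) {
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < K; k++)
                    C[i * N + j] += A[i * K + k] * B[k * N + j];
    }

    /* Same result with i-k-j order: the inner loop now reads B and writes C
     * contiguously, which vectorizes and prefetches far better. */
    void matmul_ikj(int M, int N, int K,
                    const float *A, const float *B, float *C) {
        for (int i = 0; i < M; i++)
            for (int k = 0; k < K; k++) {
                float a = A[i * K + k];
                for (int j = 0; j < N; j++)
                    C[i * N + j] += a * B[k * N + j];
            }
    }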
iVishalr/GEMM
Fast matrix multiplication implementation in the C programming language. The algorithm is similar to what NumPy uses to compute dot products.
mz24cn/gemm_optimization
This repository targets performance optimization of the OpenCL GEMM function. It compares several libraries (clBLAS, CLBlast, MIOpenGemm, Intel MKL on CPU, and cuBLAS on CUDA) across different matrix sizes, vendors' hardware, and operating systems. Ready to use out of the box: MSVC, MinGW, and Linux (CentOS) x86_64 binaries are provided.
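For context on how such comparisons call into cuBLAS, a minimal sketch of a row-major SGEMM via cublasSgemm (the API call is real; the operand-swap trick handles cuBLAS's column-major convention, and the wrapper name is made up here):

    #include <cublas_v2.h>

    /* Row-major C = A * B (A: MxK, B: KxN) computed as column-major
     * C^T = B^T * A^T, i.e. by swapping the operand order. */
    void sgemm_cublas(cublasHandle_t handle, int M, int N, int K,
                      const float *d_A, const float *d_B, float *d_C) {
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    N, M, K,
                    &alpha, d_B, N, d_A, K,
                    &beta, d_C, N);
    }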
xziya/gemm-opt
Manually optimize the GEMM (GEneral Matrix Multiply) operation; there is still a long way to go.
digital-nomad-cheng/matmul_cuda_kernel_tvm
Automatically generate optimized MatMul CUDA kernels using the TVM auto-scheduler.
fspiga/phiGEMM
phiGEMM: CPU-GPU hybrid matrix-matrix multiplication library
fan1997/HP-SpMM-SDDMM
Fast SpMM implementation on GPUs for GNN (IPDPS'23)
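Not the paper's kernel, but as a baseline sketch of what SpMM for GNNs computes: C = A_sparse * B_dense with A in CSR format, one thread per output element (buffer names are illustrative).

    /* Baseline CSR SpMM: C[M x N] = A_sparse[M x K] * B[K x N], row-major B/C.
     * A is stored as CSR (row_ptr, col_idx, vals); one thread per C element. */
    __global__ void spmm_csr_naive(int M, int N,
                                   const int *row_ptr, const int *col_idx,
                                   const float *vals,
                                   const float *B, float *C) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= M || col >= N) return;
        float acc = 0.0f;
        for (int p = row_ptr[row]; p < row_ptr[row + 1]; p++)
            acc += vals[p] * B[col_idx[p] * N + col];
        C[row * N + col] = acc;
    }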
marina-neseem/Accera-High-Perf-DL
Case studies for using Accera, the open-source cross-platform compiler from Microsoft Research, to create high-performance deep learning computations (e.g., GEMM and convolution).
hwchen2017/Optimize_DGEMM_on_Intel_CPU
Implementations of the DGEMM algorithm using different tricks to optimize performance.
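One of the standard tricks such repositories walk through is cache blocking; a minimal sketch, assuming row-major storage and an arbitrarily chosen block size (real implementations add operand packing and a SIMD micro-kernel on top):

    /* Cache-blocked DGEMM: C[MxN] += A[MxK] * B[KxN], row-major.
     * Working on BLK x BLK tiles keeps the active data resident in cache. */
    #define BLK 64

    void dgemm_blocked(int M, int N, int K,
                       const double *A, const double *B, double *C) {
        for (int ii = 0; ii < M; ii += BLK)
            for (int kk = 0; kk < K; kk += BLK)
                for (int jj = 0; jj < N; jj += BLK)
                    for (int i = ii; i < ii + BLK && i < M; i++)
                        for (int k = kk; k < kk + BLK && k < K; k++) {
                            double a = A[i * K + k];
                            for (int j = jj; j < jj + BLK && j < N; j++)
                                C[i * N + j] += a * B[k * N + j];
                        }
    }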
JoeruCodes/CUDA-GEMM-kernel
My attempt at making a GEMM kernel.
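The usual first version of such a kernel maps one thread to one output element; a minimal sketch (not the author's code):

    /* Naive SGEMM kernel: C[MxN] = A[MxK] * B[KxN], row-major.
     * One thread per output element; every thread re-reads full rows of A
     * and columns of B from global memory, which later kernels optimize away. */
    __global__ void sgemm_naive(int M, int N, int K,
                                const float *A, const float *B, float *C) {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row >= M || col >= N) return;
        float acc = 0.0f;
        for (int k = 0; k < K; k++)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }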
scocoyash/Convolution-To-Gemm
My experiments with lowering convolution to GEMM.
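The core of lowering convolution to GEMM is im2col: each receptive field is unrolled into a column of a buffer so the convolution becomes a single matrix multiply against the flattened filter weights. A minimal sketch, assuming CHW input layout, unit stride, and no padding (names and layout choices are illustrative):

    /* im2col: input is C_in x H x W, kernel window is KH x KW, stride 1, no padding.
     * The output buffer has (C_in*KH*KW) rows and (H_out*W_out) columns, so the
     * convolution becomes weights[C_out x (C_in*KH*KW)] * col. */
    void im2col(int C_in, int H, int W, int KH, int KW,
                const float *input, float *col) {
        int H_out = H - KH + 1;
        int W_out = W - KW + 1;
        for (int c = 0; c < C_in; c++)
            for (int kh = 0; kh < KH; kh++)
                for (int kw = 0; kw < KW; kw++) {
                    int row = (c * KH + kh) * KW + kw;
                    for (int y = 0; y < H_out; y++)
                        for (int x = 0; x < W_out; x++)
                            col[row * (H_out * W_out) + y * W_out + x] =
                                input[(c * H + y + kh) * W + x + kw];
                }
    }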
xylcbd/gemm_base
GEMM baseline code.
hpca-uji/ConvLIB
ConvLIB is a library of convolution kernels for multicore processors with ARM (NEON) or RISC-V architectures.
hwchen2017/Optimize_SGEMM_on_Nvidia_GPU
Implementations of the SGEMM algorithm on Nvidia GPUs using different tricks to optimize performance.
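The first of those tricks is typically shared-memory tiling; a minimal sketch (not the repository's code), assuming M, N, and K are multiples of the tile size:

    /* Tiled SGEMM: C[MxN] = A[MxK] * B[KxN], row-major. Each block computes a
     * TILE x TILE tile of C, staging slices of A and B through shared memory
     * so each global element is loaded once per tile pass. */
    #define TILE 16

    __global__ void sgemm_tiled(int M, int N, int K,
                                const float *A, const float *B, float *C) {
        __shared__ float As[TILE][TILE];
        __shared__ float Bs[TILE][TILE];
        int row = blockIdx.y * TILE + threadIdx.y;
        int col = blockIdx.x * TILE + threadIdx.x;
        float acc = 0.0f;
        for (int t = 0; t < K / TILE; t++) {
            As[threadIdx.y][threadIdx.x] = A[row * K + t * TILE + threadIdx.x];
            Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
            __syncthreads();
            for (int k = 0; k < TILE; k++)
                acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
            __syncthreads();
        }
        C[row * N + col] = acc;
    }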