
bhSPARSE: A Sparse BLAS Library

MIT License




Introduction

bhSPARSE provides basic linear algebra subroutines (BLAS) for sparse matrix computations on heterogeneous parallel processors. The library is still under development, but the source code of several important building blocks is already available.



SpMV and SpGEMM for Benchmarking

1. Sparse Matrix-Vector Multiplication (SpMV) on Intel CPUs, NVIDIA GPUs, AMD GPUs, and Intel Xeon Phi using the CSR5 format

Code repository: https://github.com/bhSPARSE/Benchmark_SpMV_using_CSR5

Paper: Weifeng Liu and Brian Vinter, "CSR5: An Efficient Storage Format for Cross-Platform Sparse Matrix-Vector Multiplication". In Proceedings of the 29th ACM International Conference on Supercomputing (ICS '15), pp. 339-350, 2015. [pdf][slides]
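
As a concrete point of reference, the sketch below is a minimal serial SpMV over the standard CSR arrays (row_ptr, col_idx, val). It illustrates only the conventional CSR baseline that CSR5 is designed to improve upon; the CSR5 format itself re-partitions the nonzeros into small tiles with auxiliary per-tile information, which is not reproduced here. The function name spmv_csr and the surrounding scaffolding are illustrative, not part of the repository's API.

```cpp
#include <cstdio>
#include <vector>

// Serial SpMV y = A*x with A stored in the standard CSR arrays
// (row_ptr, col_idx, val). Illustrative baseline only; not the CSR5 kernel.
void spmv_csr(int num_rows,
              const std::vector<int>& row_ptr,
              const std::vector<int>& col_idx,
              const std::vector<double>& val,
              const std::vector<double>& x,
              std::vector<double>& y)
{
    for (int i = 0; i < num_rows; ++i)
    {
        double sum = 0.0;
        for (int j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
            sum += val[j] * x[col_idx[j]];
        y[i] = sum;
    }
}

int main()
{
    // 3x3 example matrix:
    // [ 1 0 2 ]
    // [ 0 3 0 ]
    // [ 4 0 5 ]
    std::vector<int>    row_ptr = {0, 2, 3, 5};
    std::vector<int>    col_idx = {0, 2, 1, 0, 2};
    std::vector<double> val     = {1, 2, 3, 4, 5};
    std::vector<double> x       = {1, 1, 1};
    std::vector<double> y(3);

    spmv_csr(3, row_ptr, col_idx, val, x, y);
    std::printf("%g %g %g\n", y[0], y[1], y[2]);  // expected: 3 3 9
    return 0;
}
```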

2. Sparse Matrix-Vector Multiplication (SpMV) on Intel, AMD, and NVIDIA heterogeneous processors using the CSR format

Code repository: https://github.com/bhSPARSE/Benchmark_SpMV_using_CSR

Paper: Weifeng Liu and Brian Vinter, "Speculative Segmented Sum for Sparse Matrix-Vector Multiplication on Heterogeneous Processors". Parallel Computing, Volume 49, pp. 179-193, November 2015. [pdf]
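
The paper above builds CSR-based SpMV around a (speculative) segmented sum. The following serial, non-speculative sketch shows that two-phase view in its simplest form: first compute one partial product per nonzero, then reduce each row's segment of the product array, with segment boundaries taken from row_ptr. The parallel and speculative parts of the actual method are omitted, and all names here are illustrative rather than taken from the repository.

```cpp
#include <cstdio>
#include <vector>

// CSR SpMV written as two phases: (1) an element-wise multiply producing one
// partial product per nonzero, and (2) a segmented sum over the product array,
// where row i owns the segment [row_ptr[i], row_ptr[i+1]).
// Serial, non-speculative illustration only.
void spmv_csr_segsum(int num_rows,
                     const std::vector<int>& row_ptr,
                     const std::vector<int>& col_idx,
                     const std::vector<double>& val,
                     const std::vector<double>& x,
                     std::vector<double>& y)
{
    const int nnz = row_ptr[num_rows];

    // Phase 1: per-nonzero products (independent, trivially parallel).
    std::vector<double> products(nnz);
    for (int j = 0; j < nnz; ++j)
        products[j] = val[j] * x[col_idx[j]];

    // Phase 2: segmented sum, one segment per row.
    for (int i = 0; i < num_rows; ++i)
    {
        double sum = 0.0;
        for (int j = row_ptr[i]; j < row_ptr[i + 1]; ++j)
            sum += products[j];
        y[i] = sum;
    }
}

int main()
{
    // Same 3x3 matrix as in the previous sketch.
    std::vector<int>    row_ptr = {0, 2, 3, 5};
    std::vector<int>    col_idx = {0, 2, 1, 0, 2};
    std::vector<double> val     = {1, 2, 3, 4, 5};
    std::vector<double> x       = {1, 1, 1};
    std::vector<double> y(3);

    spmv_csr_segsum(3, row_ptr, col_idx, val, x, y);
    std::printf("%g %g %g\n", y[0], y[1], y[2]);  // expected: 3 3 9
    return 0;
}
```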

3. Sparse Matrix-Matrix Multiplication (SpGEMM) on GPUs and Heterogeneous Processors using the CSR format

Code repository: https://github.com/bhSPARSE/Benchmark_SpGEMM_using_CSR

Paper (1): Weifeng Liu and Brian Vinter, "An Efficient GPU General Sparse Matrix-Matrix Multiplication for Irregular Data". In Proceedings of the 2014 IEEE 28th International Parallel and Distributed Processing Symposium (IPDPS '14), pp. 370-381, 19-23 May 2014. [pdf][slides]

Paper (2): Weifeng Liu and Brian Vinter, "A Framework for General Sparse Matrix-Matrix Multiplication on GPUs and Heterogeneous Processors". Journal of Parallel and Distributed Computing (JPDC), Volume 85, pp. 47-61, November 2015. (Extended version of the IPDPS '14 paper.) [pdf]
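
As background for the two papers above, the sketch below shows the classic serial row-by-row (Gustavson-style) SpGEMM over CSR operands, accumulating each output row in a dense buffer. It is only a reference formulation for small inputs; roughly speaking, the cited papers address the difficult parallel aspects, such as balancing rows with very different amounts of work and allocating memory for a result matrix whose number of nonzeros is not known in advance. The CsrMatrix struct and the function name are illustrative, not the repository's API.

```cpp
#include <cstdio>
#include <vector>

// Simple CSR container used only by this sketch.
struct CsrMatrix
{
    int rows = 0, cols = 0;
    std::vector<int>    row_ptr;
    std::vector<int>    col_idx;
    std::vector<double> val;
};

// Serial row-wise (Gustavson-style) SpGEMM: C = A * B.
// Each row of C is accumulated in a dense buffer of length B.cols;
// column indices within a row are emitted in first-touch order.
CsrMatrix spgemm_rowwise(const CsrMatrix& A, const CsrMatrix& B)
{
    CsrMatrix C;
    C.rows = A.rows;
    C.cols = B.cols;
    C.row_ptr.assign(A.rows + 1, 0);

    std::vector<double> acc(B.cols, 0.0);   // dense accumulator
    std::vector<bool>   used(B.cols, false);

    for (int i = 0; i < A.rows; ++i)
    {
        std::vector<int> touched;           // columns hit in this row
        for (int ja = A.row_ptr[i]; ja < A.row_ptr[i + 1]; ++ja)
        {
            const int    k   = A.col_idx[ja];
            const double aik = A.val[ja];
            for (int jb = B.row_ptr[k]; jb < B.row_ptr[k + 1]; ++jb)
            {
                const int col = B.col_idx[jb];
                if (!used[col]) { used[col] = true; touched.push_back(col); }
                acc[col] += aik * B.val[jb];
            }
        }
        // Emit row i of C and reset the accumulator for the next row.
        for (int col : touched)
        {
            C.col_idx.push_back(col);
            C.val.push_back(acc[col]);
            acc[col]  = 0.0;
            used[col] = false;
        }
        C.row_ptr[i + 1] = static_cast<int>(C.col_idx.size());
    }
    return C;
}

int main()
{
    // 2x2 example matrix A = [[1, 2], [0, 3]] in CSR form.
    CsrMatrix A;
    A.rows = A.cols = 2;
    A.row_ptr = {0, 2, 3};
    A.col_idx = {0, 1, 1};
    A.val     = {1, 2, 3};

    CsrMatrix C = spgemm_rowwise(A, A);  // C = A * A = [[1, 8], [0, 9]]
    for (int i = 0; i < C.rows; ++i)
        for (int j = C.row_ptr[i]; j < C.row_ptr[i + 1]; ++j)
            std::printf("C(%d,%d) = %g\n", i, C.col_idx[j], C.val[j]);
    return 0;
}
```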



Contact

Weifeng Liu and Brian Vinter (vinter at nbi.ku.dk).