TAPA-HiSparse

HiSparse is an HLS library targeting high-performance sparse linear algebra kernels such as SpMV. Compared to the version presented at FPGA'22, the current HiSparse is enhanced at many levels, such as portability and compatibility with the latest vendor tools, and is also equipped with a multi-HBM SpMSpV accelerator as a new case study.

TAPA is a dataflow HLS framework from the UCLA VAST group; it features fast compilation and an expressive programming model, and it generates high-frequency FPGA accelerators.

This project migrates the latest HiSparse library from vanilla Vitis HLS to the TAPA framework in order to leverage the AutoBridge workflow for better floorplan quality and pipelining; in this way, we obtain a higher frequency and thus higher throughput for sparse computing. Ongoing work focuses on further improving HiSparse's frequency and scalability (scaling to more HBM channels) and on integration with GraphLily (where some milestones have already been reached).

Prerequisites

Basic

  • TAPA framework: 0.0.20220807.1 or later
  • Xilinx Vitis Tool: 2022.1.1
  • Package cnpy: latest
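
For reference, an environment setup along these lines may help; the Vitis install path and the cnpy source location are assumptions, so adjust them to your system.

    # Environment setup sketch (paths and URLs are assumptions; adjust to your system).
    source /opt/Xilinx/Vitis/2022.1/settings64.sh  # load the Vitis 2022.1 toolchain

    # Build and install cnpy from source (commonly hosted at rogersce/cnpy).
    git clone https://github.com/rogersce/cnpy.git
    cd cnpy && mkdir -p build && cd build
    cmake .. && make -j && sudo make install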

Hardware-specific

Workflow

Hardware

  1. Set up the TAPA and Vitis 2022.1 environments before running run_tapa.sh.

  2. Run run_tapa.sh to start the TAPA flow and then the AutoBridge process. The DSE results will be located in the spmv/run/run-* directories.

  3. Enter a spmv/run/run-* directory and run spmv_generate_bitstream.sh to synthesize and implement the hardware. (If you use a TAPA version without Vitis 2022.1 support, run sed -i 's/pfm_top_i\/dynamic_region/level0_i\/ulp/g' spmv_floorplan.tcl before generating the bitstream.)

  4. Build the host and benchmark in spmv/sw via make host benchmark. The complete sequence is sketched after this list.
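
Put together, the hardware flow looks roughly like the following; the Vitis install path, the run_tapa.sh location, and the run-1 directory name are placeholders.

    # Hardware flow sketch; adjust paths to your setup.
    source /opt/Xilinx/Vitis/2022.1/settings64.sh  # step 1: Vitis 2022.1 environment (path is an assumption)
    ./run_tapa.sh                                  # step 2: TAPA + AutoBridge; DSE results land in spmv/run/run-*
    cd spmv/run/run-1                              # step 3: pick one DSE result (directory name varies per run)
    # Only if your TAPA version lacks Vitis 2022.1 support:
    # sed -i 's/pfm_top_i\/dynamic_region/level0_i\/ulp/g' spmv_floorplan.tcl
    ./spmv_generate_bitstream.sh                   # synthesize and implement the hardware
    cd ../../sw && make host benchmark             # step 4: build host and benchmark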

Software Emulation

Simply build the host in the spmv/sw directory via make host, then execute host directly.
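
For example, assuming the repository root as the working directory:

    cd spmv/sw
    make host    # build the software-emulation host
    ./host       # run it directly (set DATASETS first; see the note below)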

Note

The environment variable DATASETS should be set to the path of the datasets (e.g., googleplus, ogbl-ppa, etc.) before running host or bench.sh. The datasets, including graph and pruned_nn, are available here.
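
A minimal invocation sketch, with a placeholder dataset path:

    # Placeholder path; point it at your local copy of the datasets.
    export DATASETS=/path/to/datasets   # should contain e.g. googleplus, ogbl-ppa
    cd spmv/sw
    ./host                              # or ./bench.sh (assuming it sits next to host)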