Pinned Repositories
AIX360
Open Source library to support interpretability and explainability of data and machine learning models
basecalling_architectures
bonito_update
A PyTorch Basecaller for Oxford Nanopore Reads
dorado
Oxford Nanopore's Basecaller
embeddedsw
Xilinx Embedded Software (embeddedsw) Development
HLS_BLSTM
The community version of HLS_BLSTM (a BLSTM FPGA accelerator for an OCR application, using CAPI/SNAP)
mlir-aie_strix
An MLIR-based toolchain for Xilinx Versal AIEngine-based devices.
mlir-air
nn-Meter
A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices.
singagan's Repositories
singagan/AIX360
Open Source library to support interpretability and explainability of data and machine learning models
singagan/basecalling_architectures
singagan/bonito_update
A PyTorch Basecaller for Oxford Nanopore Reads
singagan/dorado
Oxford Nanopore's Basecaller
singagan/embeddedsw
Xilinx Embedded Software (embeddedsw) Development
singagan/HLS_BLSTM
The community version of HLS_BLSTM (a BLSTM FPGA accelerator for an OCR application, using CAPI/SNAP)
singagan/mlir-air
singagan/nn-Meter
A DNN inference latency prediction toolkit for accurately modeling and predicting the latency on diverse edge devices.
singagan/nni
An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyper-parameter tuning.
singagan/once-for-all
[ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment
singagan/SneakySnake
SneakySnake is the first and only pre-alignment filtering algorithm that runs efficiently on modern CPU, FPGA, and GPU architectures. It greatly (by more than two orders of magnitude) expedites sequence alignment calculation for both short and long reads. Described in Bioinformatics (2020) by Alser et al. https://arxiv.org/abs/1910.09020.
singagan/Vitis_Accel_Examples
singagan/stream_aie
Multi-core HW accelerator mapping optimization framework for layer-fused ML workloads.