blublubiu's Stars
alexforencich/verilog-axi
Verilog AXI components for FPGA implementation
thu-cs-lab/tanlabs
Tsinghua Advanced Networking Labs on FPGA
ZYangChen/MoCha-Stereo
[CVPR2024] The official implementation of "MoCha-Stereo: Motif Channel Attention Network for Stereo Matching".
bxinquan/zynq_cam_isp_demo
An ISP (image signal processing) IP implemented in Verilog
H874589148/Basic-Experiment-of-Microelectronics
Course files for the basic microelectronics lab experiments, fall semester 2020 (first semester of senior year)
openasic-org/xk265
xk265: HEVC/H.265 Video Encoder IP Core (RTL)
bubbliiiing/yolo3-pytorch
Source code for yolo3-pytorch; can be used to train your own models.
sumanth-kalluri/cnn_hardware_acclerator_for_fpga
A fully parameterized Verilog implementation of computation kernels for accelerating the inference of convolutional neural networks on FPGAs
arasi15/CNN-Accelerator-Implementation-based-on-Eyerissv2
jha-lab/codebench
[TECS'23] A project on the co-design of Accelerators and CNNs.
GuoningHuang/FPGA-CNN-accelerator-based-on-systolic-array
Second national prize in the 2023 Integrated Circuit Innovation and Entrepreneurship Competition (集创赛), Pango Microsystems (紫光同创) track. A simple convolution-layer accelerator built on a systolic array; it supports the first convolution layer of yolov3-tiny, and the systolic-array structure can be flexibly adjusted to the FPGA's available DSP resources to achieve different compute efficiency.
lirui-shanghaitech/CNN-Accelerator-VLSI
Convolutional accelerator kernel, targeting ASIC & FPGA
VITA-Group/M3ViT
[NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue Liang*, Zhiwen Fan*, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang
GATECH-EIC/ViTCoD
[HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design
NVIDIA/FasterTransformer
Transformer-related optimizations, including BERT and GPT
jha-lab/transcode
[TCAD'23] TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference
SamsungLabs/Butterfly_Acc
The codes and artifacts associated with our MICRO'22 paper titled: "Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design"
kerryliukk/NTHU-ICLAB
National Tsing Hua University | Integrated Circuit Design Lab (IC LAB) | Fall semester, academic year 110 (2021)