kyh-hook's Stars
thousrm/universal_NPU-CNN_accelerator
Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks
SkyworksSolutionsInc/fplib
Fixed point math library for SystemVerilog
tharunchitipolu/Dadda-Multiplier-using-CSA
Dadda multipliers (8×8, 16×16, 32×32) in Verilog HDL.
abdelazeem201/Systolic-array-implementation-in-RTL-for-TPU
IC implementation of Systolic Array for TPU
suisuisi/FPGAandCNN
FPGA-based digit recognition: a fixed-point convolutional neural network implementation with real-time video processing
QShen3/CNN-FPGA
A CNN module implemented in Verilog that can easily be used in FPGA projects
alexforencich/verilog-i2c
Verilog I2C interface for FPGA implementation
Basantloay/Softmax_CNN
Full Verilog implementation of a softmax layer
fumimaker/Zybo_OV7670
Repository for capturing video with an OV7670 camera and a Zybo Z7-20 and outputting it over VGA.
westonb/OV7670-Verilog
Verilog modules required to get the OV7670 camera working
ZFTurbo/Verilog-Generator-of-Neural-Net-Digit-Detector-for-FPGA
Verilog Generator of Neural Net Digit Detector for FPGA
omarelhedaby/CNN-FPGA
Implementation of CNN on ZYNQ FPGA to classify handwritten numbers using MNIST database
SIAEm41/LeNet-5_FPGA
haoheliu/Key-word-spotting-DNN-GRU-DSCNN
Keyword spotting with GRU/DNN/DSCNN models
boaaaang/CNN-Implementation-in-Verilog
Convolutional Neural Network RTL-level Design
freecores/verilog_fixed_point_math_library
Fixed Point Math Library for Verilog
benreynwar/fft-dit-fpga
Verilog module for calculating the FFT.
itayhubara/BinaryNet
Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
FLHonker/ZAQ-code
CVPR 2021 : Zero-shot Adversarial Quantization (ZAQ)
park-onezero/streamlink-plugin-chzzk
streamlink plugin for CHZZK(치지직)
Efficient-ML/Awesome-Model-Quantization
A curated list of papers, docs, and code on model quantization, intended as a resource for quantization research. The project is continuously updated; PRs adding missing works (papers, repositories) are welcome.
666DZY666/micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shapes.
CMU-SAFARI/ramulator2
Ramulator 2.0 is a modern, modular, extensible, and fast cycle-accurate DRAM simulator. It provides support for agile implementation and evaluation of new memory system designs (e.g., new DRAM standards, emerging RowHammer mitigation techniques). Described in our paper https://people.inf.ethz.ch/omutlu/pub/Ramulator2_arxiv23.pdf
TropComplique/trained-ternary-quantization
Reducing the size of convolutional neural networks
lirui-shanghaitech/CNN-Accelerator-VLSI
Convolutional accelerator kernel, target ASIC & FPGA
alan4186/Hardware-CNN
A convolutional neural network implemented in hardware (Verilog)
taoyilee/clacc
Deep learning accelerator (convolutional neural networks)
8krisv/CNN-ACCELERATOR
Hardware accelerator for convolutional neural networks
itayhubara/BinaryNet.pytorch
Binarized Neural Network (BNN) for PyTorch
3b1b/manim
Animation engine for explanatory math videos