Pinned Repositories
tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
autoLiterature
autoLiterature is an automated literature manager built on Dropbox and Python.
bert-paper
Research and Materials on Hardware implementation of BERT (Bidirectional Encoder Representations from Transformers) Model
clash-win-docs-new
example_student_code
A Sample Directory for student code
FPGA
Helps beginners get started with FPGAs; shares quality FPGA articles and projects.
micronet
micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT) at high bit-widths (>2b: DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low bit-widths (≤2b, ternary and binary: TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2) pruning: normal, regular, and group convolutional channel pruning; 3) group convolution structure; 4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
pytorch-tutorial
A quick-start PyTorch deep learning tutorial (absolutely easy to follow!)
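The micronet entry above mentions post-training 8-bit quantization. As a rough illustration of the underlying idea (not micronet's actual API), a minimal sketch of symmetric per-tensor int8 quantization in NumPy: floats are mapped to the integer range [-127, 127] by a single scale factor, and dequantization recovers an approximation of the original values.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    # The scale ties the largest-magnitude weight to the int8 extreme.
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Hypothetical toy weights for illustration.
weights = np.array([0.5, -1.27, 0.03, 1.0], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Per-element round-trip error is bounded by scale / 2.
```

Real PTQ pipelines (e.g. the TensorRT calibration the description refers to) additionally calibrate activation ranges on sample data rather than using only the weight extremes.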
Yanhan-cmd's Repositories
Yanhan-cmd/autoLiterature
autoLiterature is an automated literature manager built on Dropbox and Python.
Yanhan-cmd/bert-paper
Research and Materials on Hardware implementation of BERT (Bidirectional Encoder Representations from Transformers) Model
Yanhan-cmd/clash-win-docs-new
Yanhan-cmd/example_student_code
A Sample Directory for student code
Yanhan-cmd/FPGA
Helps beginners get started with FPGAs; shares quality FPGA articles and projects.
Yanhan-cmd/micronet
micronet, a model compression and deployment library. Compression: 1) quantization: quantization-aware training (QAT) at high bit-widths (>2b: DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low bit-widths (≤2b, ternary and binary: TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2) pruning: normal, regular, and group convolutional channel pruning; 3) group convolution structure; 4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape.
Yanhan-cmd/pytorch-tutorial
A quick-start PyTorch deep learning tutorial (absolutely easy to follow!)