Pinned Repositories
Awesome-Pruning
A curated list of neural network pruning resources.
control_simulate
An open autonomous driving platform
haq
[CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision
model-compression
Model compression based on PyTorch: (1) quantization: 16/8/4/2-bit (DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and ternary/binary values (TWN/BNN/XNOR-Net); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization folding for quantization
monodepth2
[ICCV 2019] Monocular depth estimation from a single image
open
sally
StatAssist-GradBoost
A Study on Optimal INT8 Quantization-aware Training from Scratch
YOLO-Multi-Backbones-Attention
Model compression: YOLOv3 with multiple lightweight backbones (ShuffleNetV2, HuaWei GhostNet), attention, pruning, and quantization
yolov5-deepsort