Pinned Repositories
advertorch
A Toolbox for Adversarial Robustness Research
cleverhans
A library for benchmarking vulnerability to adversarial examples
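Both advertorch and cleverhans revolve around adversarial examples: inputs perturbed just enough to flip a model's prediction. As a minimal illustration (not these libraries' APIs), the canonical Fast Gradient Sign Method (FGSM) can be sketched in pure NumPy against a toy logistic-regression model; the model and values below are hypothetical.

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method on a toy logistic-regression model.

    Perturbs input x by eps in the direction of the sign of the loss
    gradient, which pushes the classifier toward higher loss.
    """
    # Forward pass: sigmoid probability of class 1.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    # Gradient of the binary cross-entropy loss w.r.t. the input x.
    grad_x = (p - y) * w
    # Single-step perturbation along the gradient sign.
    return x + eps * np.sign(grad_x)

# Toy model: a fixed weight vector, and an input classified as class 1.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([2.0, -2.0])   # clean logit score = 4.0
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=0.5)
score_clean = x @ w + b     # 4.0
score_adv = x_adv @ w + b   # attack lowers the correct-class score
```

Real attacks in these toolboxes work the same way, but take the gradient through a deep network (often iteratively, as in PGD) instead of a linear model.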
distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://nervanasystems.github.io/distiller
Itti
Saliency and object detection; an implementation of the Itti et al. model from "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis".
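The core idea of the Itti et al. model is center-surround contrast: locations that differ strongly from their neighborhood are salient. A heavily simplified single-scale sketch in NumPy (the full model uses multi-scale Gaussian pyramids over color, intensity, and orientation channels):

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with a (2k+1) x (2k+1) window, edge-padded."""
    h, w = img.shape
    padded = np.pad(img, k, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy:k + dy + h, k + dx:k + dx + w]
    return out / (2 * k + 1) ** 2

def center_surround_saliency(img, k=2):
    """Crude saliency map: |center - local surround| contrast,
    normalized to [0, 1]."""
    surround = box_blur(img, k)
    sal = np.abs(img - surround)
    peak = sal.max()
    return sal / peak if peak > 0 else sal

# A dark image with one bright pixel: that pixel should dominate the map.
img = np.zeros((9, 9))
img[4, 4] = 1.0
sal = center_surround_saliency(img)
```

Here the isolated bright pixel gets the highest saliency because it contrasts maximally with its surround, which is the behavior the model's center-surround feature maps formalize.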
Keras-Project-Template
A project template to simplify building and training deep learning models using Keras.
micronet
micronet: a model compression and deployment library. Compression: (1) quantization, including quantization-aware training (QAT) at high bit-widths (>2-bit: DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low bit-widths (≤2-bit ternary/binary: TWN, BNN, XNOR-Net), plus 8-bit post-training quantization (PTQ) via TensorRT; (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structures; (4) batch-normalization fusion for quantization. Deployment: TensorRT with FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
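The 8-bit post-training quantization mentioned above boils down to mapping float tensors onto int8 with a calibrated scale. A minimal NumPy sketch of symmetric per-tensor PTQ (a simplification of TensorRT-style calibration, not micronet's actual code; the example weights are made up):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8.

    A single scale maps the float range [-max|w|, max|w|] onto
    [-127, 127]; real PTQ calibrates this scale from activation
    statistics rather than the raw maximum.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.003, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.abs(w - w_hat).max()   # rounding error, about scale / 2
```

Quantization-aware training differs in that this round-trip is simulated inside the forward pass during training, so the network learns weights that survive the rounding.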
pytorch-adversarial_box
A PyTorch library for adversarial attacks and adversarial training
pytorch-template
wide-resnet.pytorch
Best CIFAR-10, CIFAR-100 results with wide-residual networks using PyTorch
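Wide residual networks are usually named WRN-d-k, where d is the depth and k the widening factor. Under the standard CIFAR formulation from Zagoruyko and Komodakis (three groups of residual blocks with base widths 16/32/64 scaled by k, and depth = 6n + 4), the configuration can be computed directly; the helper name below is my own:

```python
def wrn_config(depth, widen):
    """Blocks-per-group and channel widths for WRN-depth-widen.

    Standard CIFAR wide-resnet layout: an initial 16-channel conv,
    then three groups of n residual blocks with widths
    16*widen, 32*widen, 64*widen.
    """
    assert (depth - 4) % 6 == 0, "depth must be of the form 6n + 4"
    n = (depth - 4) // 6                       # residual blocks per group
    widths = [16] + [v * widen for v in (16, 32, 64)]
    return n, widths

n, widths = wrn_config(28, 10)   # WRN-28-10, the classic CIFAR config
# n == 4 blocks per group; widths == [16, 160, 320, 640]
```

The widening factor is what distinguishes these networks from plain ResNets: WRN-28-10 trades depth for width and was the basis for the strong CIFAR results this repository reproduces.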