Pinned Repositories
adversarial-examples-pytorch
Implementation of Papers on Adversarial Examples
AITemplate-study
AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code, specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
awesome-deep-learning-papers
The most cited deep learning papers
DeepLearning-500-questions
500 Questions on Deep Learning: covers common topics in probability, linear algebra, machine learning, deep learning, computer vision, and other hot areas in question-and-answer form, written to help the author and any readers who need it. The book has 18 chapters and more than 500,000 characters. Given the author's limited expertise, readers are kindly asked to point out any mistakes. Work in progress. For collaboration, contact scutjy2015@163.com. All rights reserved; infringement will be pursued. Tan 2018.06
qiulinzhang.github.io
Study notes and records.
SPConv.pytorch
[ IJCAI-20 ] Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution
TopPaper
Classic papers for beginners, and the impact scope of their authors.
underwater_guangxue
underwater_shengxue
qiulinzhang's Repositories
qiulinzhang/TopPaper
Classic papers for beginners, and the impact scope of their authors.
qiulinzhang/SPConv.pytorch
[ IJCAI-20 ] Split to Be Slim: An Overlooked Redundancy in Vanilla Convolution
qiulinzhang/underwater_guangxue
qiulinzhang/DeepLearning-500-questions
500 Questions on Deep Learning: covers common topics in probability, linear algebra, machine learning, deep learning, computer vision, and other hot areas in question-and-answer form, written to help the author and any readers who need it. The book has 18 chapters and more than 500,000 characters. Given the author's limited expertise, readers are kindly asked to point out any mistakes. Work in progress. For collaboration, contact scutjy2015@163.com. All rights reserved; infringement will be pursued. Tan 2018.06
qiulinzhang/underwater_shengxue
qiulinzhang/AITemplate-study
AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code, specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
qiulinzhang/BRECQ_sudy
Study of the PyTorch implementation of BRECQ (ICLR 2021).
qiulinzhang/caffe_from_scratch
qiulinzhang/ConferencesStastics
Statistics on top conferences in information-related areas, including AI, ML, CV, etc.
qiulinzhang/distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
qiulinzhang/electron-ssr-backup
The original author of electron-ssr deleted this great project, so it is backed up here. Development will not continue; use it while it lasts.
qiulinzhang/HAWQ_study
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
qiulinzhang/imgaug
Image augmentation for machine learning experiments.
qiulinzhang/MicroNetChallenge
qiulinzhang/mmdetection
Open MMLab Detection Toolbox and Benchmark
qiulinzhang/oscillations-qat-study
qiulinzhang/overhaul-distillation
Official PyTorch implementation of "A Comprehensive Overhaul of Feature Distillation" (ICCV 2019)
qiulinzhang/pretrained-models.pytorch
Pretrained ConvNets for pytorch: NASNet, ResNeXt, ResNet, InceptionV4, InceptionResnetV2, Xception, DPN, etc.
qiulinzhang/pytorch-cifar100
Practice on CIFAR-100 (ResNet, DenseNet, VGG, GoogLeNet, InceptionV3, InceptionV4, Inception-ResNetV2, Xception, ResNet in ResNet, ResNeXt, ShuffleNet, ShuffleNetV2, MobileNet, MobileNetV2, SqueezeNet, NASNet, Residual Attention Network, SENet)
qiulinzhang/pytorch-deeplab-xception
DeepLab v3+ model in PyTorch. Supports different backbones.
qiulinzhang/pytorch-image-models
PyTorch image models, scripts, pretrained weights -- (SE)ResNet/ResNeXT, DPN, EfficientNet, MixNet, MobileNet-V3/V2, MNASNet, Single-Path NAS, FBNet, and more
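A hedged usage sketch for the upstream timm library (pytorch-image-models); the model name and pretrained flag below follow timm's public create_model API, and are only meant to show how the listed architectures are loaded.

```python
# Minimal timm usage sketch (assumes the timm package is installed).
import timm
import torch

model = timm.create_model("resnet50", pretrained=False)  # pretrained=True downloads weights
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # one fake RGB image
print(logits.shape)  # torch.Size([1, 1000]) for ImageNet classifiers
```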
qiulinzhang/pytorch-jingwei-master
qiulinzhang/pytorch-quantization-demo
A simple network quantization demo using pytorch from scratch.
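As a rough illustration of what such a from-scratch demo covers, here is a minimal sketch of uniform affine (asymmetric) quantization in PyTorch; it is not the repository's actual code, and the function names are hypothetical.

```python
# Uniform affine quantization sketch: map floats to 8-bit integers and back.
import torch

def quantize(x, num_bits=8):
    """Per-tensor scale and zero-point, then round and clamp to the integer grid."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from the quantized values."""
    return scale * (q - zero_point)

x = torch.randn(4, 4)
q, s, zp = quantize(x)
print((x - dequantize(q, s, zp)).abs().max())  # quantization error
```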
qiulinzhang/pytorch_study
Study of the PyTorch source code.
qiulinzhang/QDrop_study
Study of the QDrop code.
qiulinzhang/ResNeSt
ResNeSt: Split-Attention Network
qiulinzhang/start-ai-compiler
Start AI Compiler
qiulinzhang/stellargraph
StellarGraph - Machine Learning on Graphs
qiulinzhang/study_How_to_optimize_in_GPU
This is a series of GPU optimization topics, introducing in detail how to optimize CUDA kernels. It covers several basic kernel optimizations, including elementwise, reduce, sgemv, sgemm, etc. The performance of these kernels is basically at or near the theoretical limit.
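The reduce kernel mentioned above is typically organized as a tree reduction, where pairs of elements are combined at each step. The following is a conceptual NumPy sketch of that access pattern only, not the repository's CUDA code.

```python
# Tree-reduction pattern a GPU reduce kernel follows: each step lets the first
# half of the buffer accumulate the second half, mirroring how threads in a
# block combine partial sums over log2(n) steps.
import numpy as np

def tree_reduce_sum(x):
    x = np.asarray(x, dtype=np.float32).copy()
    n = 1
    while n < x.size:            # pad to the next power of two with zeros
        n *= 2
    x = np.pad(x, (0, n - x.size))
    while x.size > 1:
        half = x.size // 2
        x = x[:half] + x[half:]  # one "parallel" combine step
    return float(x[0])

data = np.random.rand(1000).astype(np.float32)
print(tree_reduce_sum(data), data.sum())  # should agree up to float rounding
```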
qiulinzhang/TensorRT_study
NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.