zihaozhang9's Stars
madmaze/pytesseract
A Python wrapper for Google Tesseract
spillai/numpy-opencv-converter
OpenCV <=> NumPy Converter using Boost::Python
BIGBALLON/PyTorch-CPP
PyTorch C++ inference with LibTorch
xylcbd/MiniDL
Deep learning from scratch in C++
hfq0219/mnist
huawei-noah/AdderNet
Code for the paper "AdderNet: Do We Really Need Multiplications in Deep Learning?"
RubanSeven/CRAFT_keras
Keras implementation of Character Region Awareness for Text Detection (CRAFT)
eastmountyxz/ImageProcessing-Python
Supporting code for the author's Python image-processing articles on CSDN: Python implementations of image processing, image recognition, image classification, and related algorithms. Hopefully this resource is helpful.
zk00006/OpenTLD
OpenTLD is an open source library for real-time 2D tracking of a single object in video. This repository is no longer under development. For latest version see: http://www.tldvision.com/tld2.html
liu-jianhao/Cpp-Design-Patterns
C++ design patterns
rhyspang/CPP-Design-Patterns
C++ design patterns
Ewenwan/MVision
Robot vision and mobile robotics: VS-SLAM, ORB-SLAM2, deep-learning object detection (YOLOv3), action detection, OpenCV, PCL, machine learning, autonomous driving
TimSC/PyFeatureTrack
Feature point tracking in Python 2 or 3, using the KLT method.
zhoubolei/collectiveness
The source codes in the CVPR2013 Paper: Measuring Crowd Collectiveness
zhoubolei/GKLT
The binary code of generalized KLT tracker
yjadaa/KLT
Kanade–Lucas–Tomasi (KLT) feature tracker (MATLAB)
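The KLT trackers above all solve the same least-squares problem each frame: find the translation that best explains the intensity change between two frames. A minimal 1-D sketch of that single Lucas-Kanade step (illustrative only, not any of these repos' APIs):

```python
import numpy as np

# One Lucas-Kanade step in 1-D, the least-squares core of the KLT tracker:
# solve  d = -sum(Ix * It) / sum(Ix^2)  for the translation d (in samples),
# where Ix is the spatial gradient and It the temporal difference.
def lk_step(i0, i1):
    ix = np.gradient(i0)        # spatial gradient (per-sample units)
    it = i1 - i0                # temporal difference between "frames"
    return -np.sum(ix * it) / np.sum(ix * ix)

x = np.linspace(0, 2 * np.pi, 200)
dx = x[1] - x[0]
i0 = np.sin(x)
i1 = np.sin(x - 0.5 * dx)       # "frame 2": i0 shifted by 0.5 samples
print(lk_step(i0, i1))          # recovers a shift close to 0.5
```

In 2-D the scalar division becomes a 2x2 normal-equation solve per feature window, and the step is iterated over image pyramids for larger motions.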
MegEngine/MegEngine
MegEngine is a fast, scalable, easy-to-use deep learning framework with automatic differentiation
HobbitLong/RepDistiller
[ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods
s7ev3n/model-compression
Jonezhen/CSBook
lhyfst/knowledge-distillation-papers
knowledge distillation papers
siyuanc2/kiout
Python implementation of the Kalman-IOU Tracker
bochinski/iou-tracker
Python implementation of the IOU Tracker
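Both trackers above associate detections across frames purely by box overlap. A minimal sketch of that intersection-over-union score (boxes as `(x1, y1, x2, y2)`; names are illustrative, not either repo's API):

```python
# Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
def iou(a, b):
    # corners of the intersection rectangle
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.1429
```

The IOU tracker extends a track with the highest-overlap detection whenever that overlap exceeds a threshold; the Kalman-IOU variant additionally predicts each box forward with a Kalman filter before matching.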
amoudgl/pygoturn
PyTorch implementation of GOTURN object tracker: Learning to Track at 100 FPS with Deep Regression Networks (ECCV 2016)
foolwood/DCFNet_pytorch
DCFNet: Discriminant Correlation Filters Network for Visual Tracking
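The DCF layer in DCFNet has a closed-form solution: ridge regression solved elementwise in the Fourier domain, with the target located at the peak of the correlation response. A 1-D numpy sketch of that idea (illustrative only, not the repo's API):

```python
import numpy as np

# Minimal 1-D discriminative correlation filter: fit a filter so that
# (filter * template) matches a desired peaked response y, solved in
# closed form in the Fourier domain with ridge regularization lam.
np.random.seed(0)

def train_filter(x, y, lam=1e-2):
    X, Y = np.fft.fft(x), np.fft.fft(y)
    return np.conj(X) * Y / (np.conj(X) * X + lam)  # filter, Fourier domain

def respond(h_hat, z):
    # correlation response of the trained filter on a search signal z
    return np.real(np.fft.ifft(h_hat * np.fft.fft(z)))

x = np.random.randn(64)                   # "template" signal
y = np.exp(-np.arange(64) ** 2 / 4.0)     # desired response: peak at index 0
h_hat = train_filter(x, y)
r = respond(h_hat, np.roll(x, 5))         # search frame: template shifted by 5
print(np.argmax(r))                       # response peak tracks the shift
```

DCFNet's contribution is to backpropagate through this closed-form solver so the feature extractor underneath is learned end-to-end for tracking.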
666DZY666/micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference) and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT, fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), dynamic shape
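The core operation behind the QAT mode listed above is uniform "fake quantization": round a tensor to k-bit integer levels and immediately dequantize, so training sees the quantization error. A minimal per-tensor affine sketch (names are illustrative, not micronet's API):

```python
import numpy as np

# Uniform k-bit fake quantization: map x onto 2^k - 1 evenly spaced levels
# between its min and max, then map back to float (quantize-dequantize).
def fake_quantize(x, k=8):
    qmax = 2 ** k - 1
    scale = (x.max() - x.min()) / qmax  # step size of one integer level
    zero = x.min()                      # float value of integer level 0
    q = np.round((x - zero) / scale)    # integer levels in [0, qmax]
    return q * scale + zero             # dequantized float tensor

w = np.linspace(-1.0, 1.0, 5)
print(fake_quantize(w, k=8))            # close to w, on the 8-bit grid
```

In QAT the rounding step is treated as identity in the backward pass (the straight-through estimator), which is what lets gradients flow through it.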
xiaomi-automl/MoGA
MoGA: Searching Beyond MobileNetV3
rwightman/gen-efficientnet-pytorch
Pretrained EfficientNet, EfficientNet-Lite, MixNet, MobileNetV3 / V2, MNASNet A1 and B1, FBNet, Single-Path NAS
tensorflow/models
Models and examples built with TensorFlow
tanluren/yolov3-channel-and-layer-pruning
YOLOv3/YOLOv4 channel and layer pruning, knowledge distillation