Fairylang's Stars
PaddlePaddle/FastDeploy
⚡️An easy-to-use, fast deep learning model deployment toolkit for ☁️Cloud, 📱Mobile, and 📹Edge. Covers 20+ mainstream scenarios across image, video, text, and audio, and 150+ SOTA models, with end-to-end optimization and multi-platform, multi-framework support.
DefTruth/lite.ai.toolkit
🛠 A lite C++ toolkit of awesome AI models, support ONNXRuntime, MNN, TNN, NCNN and TensorRT.
tensorflow/model-optimization
A toolkit to optimize ML models for deployment for Keras and TensorFlow, including quantization and pruning.
quic/aimet
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
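The core post-training primitive behind toolkits like AIMET and tensorflow/model-optimization is affine (asymmetric) quantization: map a tensor's observed float range onto an integer grid and keep the (scale, zero-point) pair for dequantization. A minimal NumPy sketch of that mapping (illustrative only, not AIMET's actual API):

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Post-training affine (asymmetric) quantization of a float tensor.

    Maps the observed [min, max] range of x onto the unsigned integer grid
    [0, 2**num_bits - 1]; returns the integer tensor plus the
    (scale, zero_point) pair needed to dequantize.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)  # range must contain 0
    scale = (x_max - x_min) / (qmax - qmin) or 1.0   # guard all-zero input
    zero_point = int(round(qmin - x_min / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Recover an approximate float tensor from the quantized one."""
    return (q.astype(np.float32) - zero_point) * scale
```

The round-trip error is bounded by half the scale, which is why an 8-bit grid is usually enough for inference-time activations.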
mlc-ai/mlc-zh
zysxmu/IntraQ
PyTorch implementation of our paper accepted by CVPR 2022 -- IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization
tding1/CDFI
[CVPR 2021] CDFI: Compression-Driven Network Design for Frame Interpolation
52CV/CVPR-2021-Papers
JDAI-CV/FaceX-Zoo
A PyTorch Toolbox for Face Recognition
zma-c-137/VarGFaceNet
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB
💎 1MB lightweight face detection model
MirrorYuChen/mnn_example
Examples built on Alibaba MNN: MobileNet classifier, CenterFace and UltraFace detectors, PFLD and ZQ landmarkers, and MobileFaceNet
alibaba/MNNKit
MNNKit is a collection of AI solutions for mobile developers, powered by MNN engine.
alibaba/MNN
MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
lmbxmu/HRank
PyTorch implementation of our paper accepted by CVPR 2020 (Oral) -- HRank: Filter Pruning using High-Rank Feature Map
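The HRank idea can be sketched in a few lines of NumPy: score each filter by the average matrix rank of its output feature maps over a batch (low-rank maps carry less information), then keep only the top-scoring filters. This is an illustrative sketch, not the repository's code:

```python
import numpy as np

def hrank_filter_scores(feature_maps):
    """Score each filter by the mean rank of its feature maps over a batch.

    feature_maps: array of shape (batch, channels, H, W).
    Returns one score per channel; low scores mark pruning candidates.
    """
    b, c, _, _ = feature_maps.shape
    ranks = np.empty((b, c))
    for i in range(b):
        for j in range(c):
            ranks[i, j] = np.linalg.matrix_rank(feature_maps[i, j])
    return ranks.mean(axis=0)

def prune_mask(scores, keep_ratio=0.5):
    """Boolean mask that keeps the highest-scoring filters."""
    k = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[::-1][:k]
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask
```

In the paper the ranks are estimated from a handful of input batches, since they turn out to be stable across inputs.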
IntelLabs/distiller
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Tencent/PocketFlow
An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
666DZY666/micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) at high bit-widths (>2b; DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low bit-widths (≤2b; ternary and binary: TWN/BNN/XNOR-Net), plus 8-bit post-training quantization (PTQ, TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group-convolution structures; (4) batch-normalization fusion for quantization. Deployment: TensorRT with FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
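The TWN-style ternarization that micronet lists can be sketched with the common heuristic from the Ternary Weight Networks paper: threshold weights at 0.7 times their mean magnitude and map the survivors to a single learned-free scale. A sketch under those assumptions, not micronet's code:

```python
import numpy as np

def ternarize_twn(w):
    """TWN-style ternary weight quantization.

    Thresholds at delta = 0.7 * mean(|w|) and maps weights to
    {-alpha, 0, +alpha}, where alpha is the mean magnitude of the
    weights that survive the threshold.
    """
    delta = 0.7 * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha * np.sign(w) * mask
```

In actual QAT the ternarization runs in the forward pass while gradients flow to the full-precision weights via a straight-through estimator; the sketch shows only the forward mapping.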
AlexanderParkin/CASIA-SURF_CeFA
Face Anti-spoofing Attack Detection Challenge@CVPR2020
amusi/CVPR2024-Papers-with-Code
A collection of CVPR 2024 papers and open-source projects
qyxqyx/AIM_FAS
Implementation of the paper "Learning Meta Model for Zero- and Few-shot Face Anti-spoofing"
SeuTao/FaceBagNet
FaceBagNet - Patch-based Methods for Multi-modal Face Anti-spoofing (FAS)
experiencor/keras-yolo3
Training and detecting objects with YOLOv3
eriklindernoren/PyTorch-YOLOv3
Minimal PyTorch implementation of YOLOv3
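Both YOLOv3 repositories above end their detection pipeline the same way: compute IoU between candidate boxes and apply greedy non-maximum suppression. A minimal pure-Python sketch of that post-processing step (illustrative, not either repo's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box overlapping it above iou_thresh, repeat.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep
```

Real implementations vectorize this and run it per class, but the greedy logic is the same.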
poloclub/cnn-explainer
Learning Convolutional Neural Networks with Interactive Visualization.
DayBreak-u/Thundernet_Pytorch
A PyTorch implementation of ThunderNet
zylo117/Yet-Another-EfficientDet-Pytorch
A PyTorch re-implementation of the official EfficientDet, with SOTA real-time performance and pretrained weights.
hoya012/deep_learning_object_detection
A paper list of object detection using deep learning.
google/automl
Google Brain AutoML
amusi/awesome-object-detection
Awesome Object Detection based on handong1587 github: https://handong1587.github.io/deep_learning/2015/10/09/object-detection.html