Doraemonzm's Stars
pprp/Awesome-LLM-Prune
Awesome list for LLM pruning.
pprp/Awesome-LLM-Quantization
Awesome list for LLM quantization
runwayml/stable-diffusion
Latent Text-to-Image Diffusion
CompVis/stable-diffusion
A latent text-to-image diffusion model
dbolya/tomesd
Speed up Stable Diffusion with this one simple trick!
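For context, tomesd's token merging is applied with a single patch call; the sketch below assumes the library's documented apply_patch() entry point and a diffusers pipeline (the model name and merge ratio are illustrative):

# Minimal sketch: patch a Stable Diffusion pipeline with token merging (tomesd).
import tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Merge roughly 50% of redundant tokens in the UNet's attention blocks to speed up sampling.
tomesd.apply_patch(pipe, ratio=0.5)
image = pipe("a photo of an astronaut riding a horse").images[0]

Higher ratios trade image quality for speed; the library also documents a remove_patch() call to undo the change.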
lkhl/tiny-transformers
[ECCV 2022] Implementation of the paper "Locality Guidance for Improving Vision Transformers on Tiny Datasets"
facebookresearch/mvit
Code Release for MViTv2 on Image Recognition.
LinXueyuanStdio/chatgpt-review-rebuttal-extension
ChatGPT - Review & Rebuttal: A browser extension for generating reviews and rebuttals, powered by ChatGPT. 利用 ChatGPT 生成审稿意见和回复的浏览器插件
microsoft/archai
Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research.
facebookresearch/AlphaNet
AlphaNet: Improved Training of Supernets with Alpha-Divergence
Sense-X/UniFormer
[ICLR 2022] Official implementation of UniFormer
MingSun-Tse/Efficient-Deep-Learning
Collection of recent methods on (deep) neural network compression and acceleration.
raoyongming/HorNet
[NeurIPS 2022] HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions
mowangphy88/TFDMNet
Learning Convolutional Neural Networks in the Frequency Domain
HikariTJU/LD
Localization Distillation for Object Detection (CVPR 2022, TPAMI 2023)
mit-han-lab/tinyengine
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
frgfm/torch-cam
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)
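As a quick reminder of how the library is used, here is a minimal sketch following torch-cam's quickstart (the model and input tensor are placeholders):

# Minimal sketch: extract a class activation map with torch-cam's SmoothGradCAMpp.
import torch
from torchvision.models import resnet18
from torchcam.methods import SmoothGradCAMpp

model = resnet18(pretrained=True).eval()
cam_extractor = SmoothGradCAMpp(model)        # hooks the model's feature maps
input_tensor = torch.rand(1, 3, 224, 224)     # placeholder input batch
out = model(input_tensor)
# Retrieve the CAM for the top predicted class from the hooked activations.
activation_map = cam_extractor(out.squeeze(0).argmax().item(), out)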
HRNet/HRNet-Image-Classification
Train the HRNet model on ImageNet
HRNet/HRNet-Semantic-Segmentation
The OCR approach has been rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of HRNet for semantic segmentation: https://arxiv.org/abs/1908.07919
developer0hye/SKNet-PyTorch
Nearly Perfect & Easily Understandable PyTorch Implementation of SKNet
facebookresearch/moco-v3
PyTorch implementation of MoCo v3: https://arxiv.org/abs/2104.02057
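For reference, a minimal sketch of the symmetric contrastive loss described in the MoCo v3 paper's pseudocode (tensor names and the temperature value are illustrative, not taken from this repo):

# Sketch of the MoCo v3 contrastive loss: queries q from the base encoder,
# keys k from the momentum encoder; positive pairs sit on the diagonal of q @ k.T.
import torch
import torch.nn.functional as F

def contrastive_loss(q: torch.Tensor, k: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / tau                            # pairwise cosine similarities
    labels = torch.arange(q.size(0), device=q.device)   # i-th query matches i-th key
    return F.cross_entropy(logits, labels) * (2 * tau)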
epfml/attention-cnn
Source code for "On the Relationship between Self-Attention and Convolutional Layers"
VITA-Group/SViTE
[NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
google-research/rigl
End-to-end training of sparse deep neural networks with little-to-no performance loss.
Sara-Ahmed/SiT
Self-supervised vIsion Transformer (SiT)
czczup/ViT-Adapter
[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
DirtyHarryLYL/Transformer-in-Vision
A collection of recent Transformer-based computer vision works and related resources.
ziplab/LITv2
[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "Fast Vision Transformers with HiLo Attention"
mit-han-lab/tinyml
VITA-Group/UVC
[ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang