lzd19981105's Stars
OpenBMB/MiniCPM
MiniCPM3-4B: An edge-side LLM that surpasses GPT-3.5-Turbo.
CompVis/stable-diffusion
A latent text-to-image diffusion model
baofff/U-ViT
A PyTorch implementation of the paper "All are Worth Words: A ViT Backbone for Diffusion Models".
VITA-Group/SFW-Once-for-All-Pruning
[ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, Tianlong Chen, Wuyang Chen, Dong Liu, Zhangyang Wang
meta-llama/llama3
The official Meta Llama 3 GitHub site
THU-MIG/yolov10
YOLOv10: Real-Time End-to-End Object Detection [NeurIPS 2024]
RAIVNLab/STR
Soft Threshold Weight Reparameterization for Learnable Sparsity
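The core idea of STR can be sketched in a few lines: each layer learns a scalar that is passed through a sigmoid to form a soft threshold, and weights whose magnitude falls below it are shrunk to exactly zero, so sparsity is learned rather than hand-set. A minimal NumPy sketch (the function name and constants are illustrative, not the repo's API):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_threshold(w, s):
    """STR-style reparameterization: shrink weights toward zero by a
    learned per-layer threshold sigmoid(s). Weights whose magnitude is
    below the threshold become exactly zero, yielding learnable sparsity."""
    thresh = sigmoid(s)
    return np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)

w = np.array([0.8, -0.3, 0.05, -0.9])
sparse_w = soft_threshold(w, s=0.0)  # sigmoid(0) = 0.5, so |w| < 0.5 is zeroed
```

In training, `s` receives gradients like any other parameter, so each layer settles on its own sparsity level.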
he-y/soft-filter-pruning
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
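The "soft" in soft filter pruning means the lowest-norm filters are zeroed at the end of each epoch but stay in the tensor and keep receiving gradient updates, so a wrongly pruned filter can recover. A NumPy sketch of one pruning step (function name and shapes are assumptions for illustration):

```python
import numpy as np

def soft_prune_filters(conv_weight, prune_ratio):
    """Soft filter pruning step: zero the filters with the smallest L2
    norm, but keep them in the tensor so later gradient updates can
    revive them. conv_weight has shape (out_channels, in_ch, kH, kW)."""
    n_filters = conv_weight.shape[0]
    n_prune = int(n_filters * prune_ratio)
    norms = np.linalg.norm(conv_weight.reshape(n_filters, -1), axis=1)
    pruned_idx = np.argsort(norms)[:n_prune]   # smallest-norm filters
    out = conv_weight.copy()
    out[pruned_idx] = 0.0                      # zeroed, not removed
    return out, pruned_idx

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))
w_pruned, idx = soft_prune_filters(w, prune_ratio=0.25)
```

Only after training finishes are the zeroed filters physically removed to get the actual speedup.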
ZhuangzhuangWu/SkyNet
iSmart3 https://github.com/TomG008/SkyNet
TomG008/SkyNet
IDEA-Research/GroundingDINO
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
nndeploy/nndeploy
nndeploy is an end-to-end model deployment framework. Built on multi-backend inference and DAG-based model deployment, it aims to give users a cross-platform, easy-to-use, high-performance deployment experience.
NVlabs/SMCP
IST-DASLab/ACDC
Code for reproducing "AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks" (NeurIPS 2021)
TanayNarshana/DFPC-Pruning
[ICLR 2023] PyTorch code for DFPC: Data flow driven pruning of coupled channels without data.
diaoenmao/Pruning-Deep-Neural-Networks-from-a-Sparsity-Perspective
[ICLR 2023] Pruning Deep Neural Networks from a Sparsity Perspective
he-y/Awesome-Pruning
A curated list of neural network pruning resources.
lmbxmu/HRank
PyTorch implementation of the CVPR 2020 (Oral) paper "HRank: Filter Pruning using High-Rank Feature Map"
imcyx/EverydayWechat
WeChat assistant: 1. Sends scheduled, customized daily messages to friends (or a girlfriend). 2. Bot auto-replies to friends. 3. Group-assistant features (e.g., queries for garbage sorting, weather, calendar, real-time movie box office, package tracking, PM2.5, etc.)
xidongwu/AutoTrainOnce
airockchip/rknn-toolkit2
IST-DASLab/spdy
Code for ICML 2022 paper "SPDY: Accurate Pruning with Speedup Guarantees"
Yanqi-Chen/LATS
To appear at the Eleventh International Conference on Learning Representations (ICLR 2023).
wimh966/QDrop
The official PyTorch implementation of the ICLR 2022 paper "QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization"
hustvl/PD-Quant
[CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric
AMLab-Amsterdam/L0_regularization
Learning Sparse Neural Networks through L0 regularization
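The L0 approach gates each weight (or group) with a sample from the hard-concrete distribution: a sigmoid-shaped relaxation that is stretched past [0, 1] and clamped, so gates can be exactly zero while staying differentiable in expectation. A NumPy sketch, assuming the commonly used constants beta=2/3, gamma=-0.1, zeta=1.1:

```python
import numpy as np

# Assumed hard-concrete constants (a common choice, not pulled from this repo).
BETA, GAMMA, ZETA = 2.0 / 3.0, -0.1, 1.1

def hard_concrete_gate(log_alpha, u):
    """Sample a gate z in [0, 1] from the hard-concrete distribution.
    u is uniform noise in (0, 1); log_alpha is the learnable gate logit."""
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / BETA))
    s_stretched = s * (ZETA - GAMMA) + GAMMA     # stretch to (gamma, zeta)
    return np.clip(s_stretched, 0.0, 1.0)        # hard clamp -> exact zeros

def expected_l0(log_alpha):
    """Probability the gate is non-zero: the differentiable L0 penalty."""
    return 1.0 / (1.0 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))

z_mid = hard_concrete_gate(np.array([0.0]), np.array([0.5]))    # open-ish gate
z_off = hard_concrete_gate(np.array([-10.0]), np.array([0.5]))  # closed gate
```

Summing `expected_l0` over all gates gives the sparsity penalty that is added to the task loss.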
papers-submission/CalibTIP
Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
Qualcomm-AI-research/pruning-vs-quantization
hpcaitech/Open-Sora
Open-Sora: Democratizing Efficient Video Production for All
mit-han-lab/smoothquant
[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
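SmoothQuant's key identity is that Y = XW = (X diag(s)^-1)(diag(s) W): a per-input-channel scale s can migrate activation outliers into the weights without changing the output, making both tensors easier to quantize. A NumPy sketch of the scale computation (names are illustrative, not the library's API):

```python
import numpy as np

def smooth_scales(x, w, alpha=0.5):
    """SmoothQuant-style smoothing: pick per-input-channel scales s that
    balance activation and weight ranges, migrating activation outliers
    into the weights. x: (tokens, in_features); w: (in_features, out)."""
    act_max = np.abs(x).max(axis=0)          # per-channel activation range
    wgt_max = np.abs(w).max(axis=1)          # per-channel weight range
    return act_max ** alpha / wgt_max ** (1 - alpha)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
x[:, 3] *= 50.0                              # simulate one outlier channel
w = rng.normal(size=(8, 4))

s = smooth_scales(x, w)
x_s, w_s = x / s, w * s[:, None]             # mathematically equivalent pair
```

Because the rescaling is exact, it can be folded into the preceding layer offline; only the post-smoothing tensors are quantized.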