Pinned Repositories
awesome-lm-system
A summary of system papers, frameworks, code, and tools for training or serving large models
Dipoorlet
Offline quantization tools for deployment.
LightCompress
A powerful toolkit for compressing large models, including LLMs, VLMs, and video generation models.
LightLLM
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
LightX2V
Light Video Generation Inference Framework
MQBench
Model Quantization Benchmark
Qwen-Image-Lightning
Qwen-Image-Lightning: speeding up the Qwen-Image model with distillation
TFMQ-DM
[CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models".
United-Perception
United Perception
Wan2.2-Lightning
Wan2.2-Lightning: speeding up the Wan2.2 model with distillation
ModelTC's Repositories
ModelTC/United-Perception
United Perception
ModelTC/mqbench-paper
ModelTC/rank_dataset
PyTorch Dataset Rank Dataset
ModelTC/NNLQP
ModelTC/LPCV2021_Winner_Solution
ModelTC/Prototype
ModelTC/AAAI2023_EAMPD
[AAAI 2023] Efficient and accurate models towards a practical deep learning baseline
ModelTC/Imagenet-S
Robustness for real-world system noise
ModelTC/pyrotom
Python code hotfixing and refactoring on the fly
ModelTC/tvm-vit
ModelTC/UNRT
UNiversal RunTime