Pinned Repositories
xtuner
An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
AlphaNet
AlphaNet: Improved Training of Supernet with Alpha-Divergence
APoT_Quantization
PyTorch implementation of APoT quantization (ICLR 2020)
AttentiveNAS
Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling"
bib-shorty
Trim your bibliography to the bare minimum
CBNetV2
CMUA-Watermark
The official code for CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes (AAAI2022)
xtuner-template
DynamicDet
[CVPR 2023] DynamicDet: A Unified Dynamic Architecture for Object Detection
LZHgrla's Repositories
LZHgrla/xtuner-template
LZHgrla/accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
LZHgrla/AlphaNet
AlphaNet: Improved Training of Supernet with Alpha-Divergence
LZHgrla/AttentiveNAS
Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling"
LZHgrla/CBNetV2
LZHgrla/CMUA-Watermark
The official code for CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes (AAAI2022)
LZHgrla/examples
A set of examples around PyTorch in Vision, Text, Reinforcement Learning, etc.
LZHgrla/FQ-ViT
[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
LZHgrla/datasets
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
LZHgrla/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
LZHgrla/FOX-NAS
FOX-NAS: Fast, On-device and Explainable Neural Architecture Search
LZHgrla/InternLM
InternLM has open-sourced 7B- and 20B-parameter base and chat models tailored for practical scenarios, along with the training system.
LZHgrla/lagent
A lightweight framework for building LLM-based agents
LZHgrla/LLaMA-Factory
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
LZHgrla/LLaVA
[NeurIPS 2023 Oral] Visual Instruction Tuning: LLaVA (Large Language-and-Vision Assistant) built towards multimodal GPT-4 level capabilities.
LZHgrla/LLSQ
LZHgrla/lmdeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
LZHgrla/LMOps
General technology for enabling AI capabilities w/ LLMs and MLLMs
LZHgrla/LZHgrla
LZHgrla/minisora
The Mini Sora project aims to explore the implementation path and future development direction of Sora.
LZHgrla/mmengine
OpenMMLab Foundational Library for Training Deep Learning Models
LZHgrla/model-quantization
Collections of model quantization algorithms
LZHgrla/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (LLaMA, ChatGLM2, ChatGPT, Claude, etc.) on over 50 datasets.
LZHgrla/OQA
LZHgrla/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
LZHgrla/VLMEvalKit
An open-source evaluation toolkit of large vision-language models (LVLMs)
LZHgrla/WeChatMsg
Extract WeChat chat history, export it to HTML, Word, or CSV documents for permanent archiving, and analyze the history to generate an annual chat report.
LZHgrla/xtuner
XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models
LZHgrla/YOLOX
MegEngine implementation of YOLOX
LZHgrla/ZeroQ
[CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework