9yc's Stars
azl397985856/leetcode
LeetCode Solutions: A Record of My Problem-Solving Journey.
youngyangyang04/leetcode-master
《代码随想录》(Code Caprice) LeetCode problem guide: a recommended order for 200 classic problems, 600k words of detailed illustrated explanations, video breakdowns of difficult points, 50+ mind maps, with solutions in C++, Java, Python, Go, JavaScript, and more. No more getting lost while learning algorithms! 🔥🔥 Take a look, and you'll wish you had found it sooner! 🚀
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
EleutherAI/lm-evaluation-harness
A framework for few-shot evaluation of language models.
MegEngine/MegEngine
MegEngine is a fast, scalable, easy-to-use deep learning framework with support for automatic differentiation.
stanford-crfm/helm
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110). This framework is also used to evaluate text-to-image models in HEIM (https://arxiv.org/abs/2311.04287) and vision-language models in VHELM (https://arxiv.org/abs/2410.07112).
flexflow/FlexFlow
FlexFlow Serve: Low-Latency, High-Performance LLM Serving
kuleshov/minillm
MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs
Lyken17/pytorch-memonger
Sublinear memory optimization for deep learning. https://arxiv.org/abs/1604.06174
mallorbc/Finetune_LLMs
Repo for fine-tuning causal LLMs
mit-han-lab/tiny-training
On-Device Training Under 256KB Memory [NeurIPS'22]
amirgholami/ai_and_memory_wall
AI and Memory Wall
yxli2123/LoftQ
ShishirPatil/poet
ML model training for edge devices
uwsampl/dtr-prototype
Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616
ssbuild/llm_finetuning
Large language model fine-tuning for BLOOM, OPT, GPT, GPT-2, LLaMA, LLaMA-2, CPM-Ant, and more
pittisl/Generative-AI-Tutorial
A subjective learning guide for generative AI research
longtanle/awesome-federated-LLM-learning
A collection of research papers on Federated Learning for Large Language Models (FedLLM). The repository is continuously updated to track the frontier of FedLLM.
bibikar/feddst
Federated Dynamic Sparse Training
BaohaoLiao/mefts
[NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
theyoungkwon/TinyTrain
The official implementation of TinyTrain [ICML '24]
taehokim20/LLMem
LLMem: GPU Memory Estimation for Fine-Tuning Pre-Trained LLMs
pittisl/GreenTrainer
Code for paper "Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation" (ICLR'24)
TonyTangYu/pytorch
DELTA-pytorch: implementation of DELTA (Dynamically Optimizing GPU Memory beyond Tensor Recomputation)
zhenqincn/FedAPEN
Official implementation of the paper "FedAPEN: Personalized Cross-silo Federated Learning with Adaptability to Statistical Heterogeneity".
ChanYunHin/InCo-Aggregation
Official source code for the ICLR 2024 paper "Internal Cross-layer Gradients for Extending Homogeneity to Heterogeneity in Federated Learning".
tianlwang/eval_gsm8k
uwsampl/dtr
Dynamic Tensor Rematerialization
pittisl/FL-with-intertwined-heterogeneity