StyxXuan's Stars
OpenInterpreter/open-interpreter
A natural language interface for computers
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
ManimCommunity/manim
A community-maintained Python framework for creating mathematical animations.
QwenLM/Qwen
The official repo of Qwen (通义千问), the chat and pretrained large language models proposed by Alibaba Cloud.
EleutherAI/lm-evaluation-harness
A framework for few-shot evaluation of language models.
predibase/lorax
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
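Several of the starred repos (lorax, mLoRA, X-LoRA, MOELoRA) build on the same LoRA idea: a frozen weight matrix W plus a trainable low-rank update scaled by alpha / r. A minimal dependency-free sketch of that forward pass, with illustrative matrix shapes chosen here for clarity (not taken from any of these repos):

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """Low-rank adapted forward pass: y = W x + (alpha / r) * B (A x).
    A is (r x d_in) and B is (d_out x r), so the update B A has rank <= r.
    Only A and B are trained; W stays frozen."""
    base = matvec(W, x)               # frozen base projection
    delta = matvec(B, matvec(A, x))   # low-rank correction
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy example: identity base weight, rank-2 adapter, scale = 1.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.1, 0.0], [0.0, 0.1]]
y = lora_forward(W, A, B, [1.0, 2.0], alpha=2, r=2)
```

Because the base W is shared and only the small (A, B) pairs differ per task, a server can hold one copy of W and swap or batch many adapters cheaply, which is what makes multi-LoRA serving practical.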
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct
InternLM/lagent
A lightweight framework for building LLM-based agents
thunlp/OpenNE
An Open-Source Package for Network Embedding (NE)
XueFuzhao/OpenMoE
A family of open-sourced Mixture-of-Experts (MoE) Large Language Models
evalplus/evalplus
Rigorous evaluation of LLM-synthesized code - NeurIPS 2023
davidmrau/mixture-of-experts
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
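The core mechanism in the Shazeer et al. paper referenced above is sparse top-k gating: each token is routed to only the k experts with the largest gate logits, and their weights are renormalized with a softmax. A minimal pure-Python sketch of that routing step (the paper's noise term and load-balancing loss are omitted here):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def topk_gate(logits, k=2):
    """Sparsely-gated routing: keep the k largest gate logits,
    renormalize them with softmax, and zero out all other experts."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    weights = softmax([logits[i] for i in top])
    gates = [0.0] * len(logits)
    for i, w in zip(top, weights):
        gates[i] = w
    return gates

# Route one token among 4 experts, activating only the top 2.
gates = topk_gate([1.0, 3.0, 0.5, 2.0], k=2)
```

The output is a gate vector that sums to 1 with at most k non-zero entries, so only those k experts run their forward pass; this is the sparsity that MoE LLMs such as OpenMoE exploit.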
XueFuzhao/awesome-mixture-of-experts
A collection of AWESOME things about mixture-of-experts
google/dreambooth
maszhongming/Multi-LoRA-Composition
Repository for the Paper "Multi-LoRA Composition for Image Generation"
sabetAI/BLoRA
Batched LoRAs
rui-ye/OpenFedLLM
TUDB-Labs/mLoRA
An Efficient "Factory" to Build Multiple LoRA Adapters
JayZhang42/FederatedGPT-Shepherd
Shepherd: A foundational framework enabling federated instruction tuning for large language models
Ablustrund/LoRAMoE
LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment
aHuiWang/plot_demo
Example experiment figures for use in papers
google-research/url-nlp
EricLBuehler/xlora
X-LoRA: Mixture of LoRA Experts
stylus-diffusion/stylus
liuqidong07/MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
r-three/phatgoose
Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization"
InfiAgent/InfiAgent
tanganke/fusion_bench
FusionBench: A Comprehensive Benchmark of Deep Model Fusion
yushuiwx/MoLE
JetRunner/TuPaTE
Code for EMNLP 2022 paper "Efficiently Tuned Parameters are Task Embeddings"