moe
There are 181 repositories under the moe topic.
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
sgl-project/sglang
SGLang is a fast serving framework for large language models and vision language models.
NVIDIA/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.
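For orientation, here is a minimal sketch of what the high-level Python LLM API tends to look like; the model name and sampling arguments are placeholders, and exact import paths and signatures can vary between TensorRT-LLM versions, so treat this as an assumption rather than the library's canonical usage.

```python
# Hedged sketch of TensorRT-LLM's high-level Python LLM API (version-dependent;
# the model path and sampling values below are illustrative placeholders).
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")      # builds/loads a TensorRT engine
params = SamplingParams(max_tokens=64, temperature=0.8)
outputs = llm.generate(["Explain mixture-of-experts in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```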
czy0729/Bangumi
:electron: An unofficial, UI-first app client for https://bgm.tv on Android and iOS, built with React Native. An ad-free, hobby-driven, non-profit, Douban-like anime-tracking third-party client for bgm.tv, dedicated to ACG. Redesigned for mobile, with many enhanced features that are hard to implement on the web version, plus extensive customization options. Currently available on iOS / Android.
flashinfer-ai/flashinfer
FlashInfer: Kernel Library for LLM Serving
zai-org/GLM-4.5
GLM-4.5: An open-source large language model designed for intelligent agents by Z.ai
PKU-YuanGroup/MoE-LLaVA
【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models
MoonshotAI/MoBA
MoBA: Mixture of Block Attention for Long-Context LLMs
davidmrau/mixture-of-experts
PyTorch Re-Implementation of "The Sparsely-Gated Mixture-of-Experts Layer" by Noam Shazeer et al. https://arxiv.org/abs/1701.06538
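The layer this repository re-implements routes each token through only a few experts: a gating network scores all experts, keeps the top-k, and sums those experts' outputs weighted by the renormalized gate probabilities. Below is a minimal PyTorch sketch of that top-k routing pattern; class and parameter names are illustrative and not the repository's API (the paper and repo additionally use noisy gating and load-balancing losses).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Minimal top-k sparsely-gated MoE layer (illustrative sketch, not the repo's API)."""
    def __init__(self, dim, num_experts=8, k=2, hidden=None):
        super().__init__()
        hidden = hidden or 4 * dim
        self.k = k
        self.gate = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                       # x: (tokens, dim)
        logits = self.gate(x)                   # (tokens, num_experts)
        topv, topi = logits.topk(self.k, dim=-1)
        weights = F.softmax(topv, dim=-1)       # renormalize over the chosen experts only
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (topi == e)                  # which tokens picked expert e, and in which slot
            token_idx, slot_idx = mask.nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue                        # expert e received no tokens this step
            out[token_idx] += weights[token_idx, slot_idx, None] * expert(x[token_idx])
        return out

# Usage: layer = SparseMoE(dim=512); y = layer(torch.randn(16, 512))
```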
pjlab-sys4nlp/llama-moe
⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
microsoft/Tutel
Tutel MoE: an optimized Mixture-of-Experts library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4
sail-sg/Adan
Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
open-compass/MixtralKit
A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI
ScienceOne-AI/DeepSeek-671B-SFT-Guide
An open-source solution for full-parameter fine-tuning of DeepSeek-V3/R1 671B, including complete code and scripts from training to inference, as well as some practical experience and conclusions.
ymcui/Chinese-Mixtral
Chinese Mixtral Mixture-of-Experts large language models (Chinese Mixtral MoE LLMs)
mindspore-courses/step_into_llm
MindSpore online courses: Step into LLM
kokororin/pixiv.moe
😘 A Pinterest-style layout site that shows illustrations from pixiv.net, ordered by popularity.
weigao266/Awesome-Efficient-Arch
Speed Always Wins: A Survey on Efficient Architectures for Large Language Models
LISTEN-moe/android-app
Official LISTEN.moe Android app
SkyworkAI/MoH
MoH: Multi-Head Attention as Mixture-of-Head Attention
inferflow/inferflow
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).
SkyworkAI/MoE-plus-plus
[ICLR 2025] MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts
libgdx/gdx-pay
A libGDX cross-platform API for InApp purchasing.
IBM/ModuleFormer
ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
inclusionAI/Ling
Ling is a MoE LLM provided and open-sourced by InclusionAI.
shufangxun/LLaVA-MoD
[ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation
cocowy1/SMoE-Stereo
[ICCV 2025 Highlight] 🌟🌟🌟 Learning Robust Stereo Matching in the Wild with Selective Mixture-of-Experts
LINs-lab/DynMoE
[ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models
Facico/GOAT-PEFT
[ICML2025] Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment
junchenzhi/Awesome-LLM-Ensemble
A curated list of Awesome-LLM-Ensemble papers for the survey "Harnessing Multiple Large Language Models: A Survey on LLM Ensemble"
kyegomez/SwitchTransformers
Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity"
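Switch routing reduces the gate to top-1: each token is dispatched to a single expert, and an auxiliary loss (N · Σ f_i · P_i, with f_i the fraction of tokens routed to expert i and P_i the mean router probability for it) keeps the load balanced. A hedged sketch of that routing step, with illustrative names only and not this repository's API:

```python
import torch
import torch.nn.functional as F

def switch_route(x, router_weight):
    """Top-1 (switch) routing sketch. x: (tokens, dim), router_weight: (dim, num_experts)."""
    logits = x @ router_weight                    # (tokens, num_experts)
    probs = F.softmax(logits, dim=-1)
    gate, expert_idx = probs.max(dim=-1)          # each token keeps its single best expert
    # Load-balancing auxiliary loss from the paper: num_experts * sum(f_i * P_i).
    num_experts = router_weight.shape[1]
    f = torch.bincount(expert_idx, minlength=num_experts).float() / x.shape[0]
    P = probs.mean(dim=0)
    aux_loss = num_experts * torch.sum(f * P)
    return gate, expert_idx, aux_loss
```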
shalldie/chuncai
A lovely page wizard whose job is to sell moe (act cute).
kyegomez/MoE-Mamba
Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Zeta
LISTEN-moe/desktop-app
Official LISTEN.moe Desktop Client
Simplifine-gamedev/Simplifine
🚀 Easy, open-source LLM finetuning with one-line commands, seamless cloud integration, and popular optimization frameworks. ✨
marisukukise/japReader
japReader is an app for breaking down Japanese sentences and tracking vocabulary progress