
A curated list of awesome open-source libraries for production LLMs

MIT License

Awesome-Production-LLM

This repository contains a curated list of awesome open-source libraries for production large language models.


Quick links

📚LLM Data Preprocessing 🤖LLM Training / Finetuning 📊LLM Evaluation / Benchmark
🚀LLM Serving / Inference 🛠️LLM Application / RAG 🧐LLM Testing / Monitoring
🛡️LLM Guardrails / Security 🍳LLM Cookbook / Examples 🎓LLM Courses / Education

LLM Data Preprocessing

  • data-juicer (ModelScope) A one-stop data processing system to make data higher-quality, juicier, and more digestible for (multimodal) LLMs!
  • datatrove (HuggingFace) Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks.
  • dolma (AllenAI) Data and tools for generating and inspecting OLMo pre-training data.
  • dataverse (Upstage) The Universe of Data. All about data, data science, and data engineering
  • NeMo-Curator (NVIDIA) Scalable toolkit for data curation
  • dps (EleutherAI) Data processing system for polyglot
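
As a quick taste of this category, here is a minimal sketch of a datatrove-style pipeline: a reader, a filter, and a writer run by a local executor. The folder paths, the length-based filter, and the task count are illustrative assumptions, not part of any particular dataset or recipe.

```python
# Minimal datatrove-style pipeline sketch (paths, filter, and task count are illustrative).
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import JsonlReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import JsonlWriter

pipeline = LocalPipelineExecutor(
    pipeline=[
        JsonlReader("raw_data/"),                       # read JSONL documents
        LambdaFilter(lambda doc: len(doc.text) > 200),  # keep only reasonably long documents
        JsonlWriter("clean_data/"),                     # write the filtered documents back out
    ],
    tasks=4,  # number of local worker tasks
)
pipeline.run()
```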

LLM Training / Finetuning

  • nanoGPT (karpathy) The simplest, fastest repository for training/finetuning medium-sized GPTs.
  • LLaMA-Factory A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
  • peft (HuggingFace) PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
  • llama-recipes (Meta) Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs.
  • Megatron-LM (NVIDIA) Ongoing research training transformer models at scale
  • litgpt (LightningAI) 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
  • trl (HuggingFace) Train transformer language models with reinforcement learning.
  • LMFlow (OptimalScale) An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
  • gpt-neox (EleutherAI) An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
  • torchtune (PyTorch) A Native-PyTorch Library for LLM Fine-tuning
  • xtuner (InternLM) An efficient, flexible and full-featured toolkit for fine-tuning LLM (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
  • nanotron (HuggingFace) Minimalistic large language model 3D-parallelism training
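
Most of the toolkits above build on parameter-efficient methods such as LoRA. A minimal sketch with peft and transformers follows; the base model and the LoRA hyperparameters are illustrative choices, not recommendations.

```python
# Minimal LoRA finetuning setup with peft (model and hyperparameters are illustrative).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # causal language modeling
    r=8,                           # rank of the LoRA update matrices
    lora_alpha=16,
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
# ...train with your usual Trainer / training loop...
```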

LLM Evaluation / Benchmark

  • evals (OpenAI) Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
  • lm-evaluation-harness (EleutherAI) A framework for few-shot evaluation of language models.
  • opencompass (OpenCompass) OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
  • deepeval (ConfidentAI) The LLM Evaluation Framework
  • lighteval (HuggingFace) LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally, alongside its recently released data processing library datatrove and training library nanotron.
  • evalverse (Upstage) The Universe of Evaluation. All about the evaluation for LLMs.
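
For example, lm-evaluation-harness exposes a Python entry point alongside its CLI. A minimal sketch, assuming the simple_evaluate API; the model and task choices are illustrative smoke-test values.

```python
# Minimal lm-evaluation-harness run via its Python API (model and task are illustrative).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # HuggingFace transformers backend
    model_args="pretrained=EleutherAI/pythia-160m",  # small model for a quick smoke test
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"]["hellaswag"])  # per-task metrics
```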

LLM Serving / Inference

  • ollama (Ollama) Get up and running with Llama 3.1, Mistral, Gemma 2, and other large language models.
  • gpt4all (NomicAI) GPT4All: Chat with Local LLMs on Any Device
  • llama.cpp LLM inference in C/C++
  • FastChat (LMSYS) An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
  • vllm A high-throughput and memory-efficient inference and serving engine for LLMs
  • guidance (guidance-ai) A guidance language for controlling large language models.
  • LiteLLM (BerriAI) Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate, Groq (100+ LLMs)
  • OpenLLM (BentoML) Run any open-source LLM, such as Llama 3.1 or Gemma, as an OpenAI-compatible API endpoint in the cloud.
  • text-generation-inference (HuggingFace) Large Language Model Text Generation Inference
  • TensorRT-LLM (NVIDIA) TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs.
  • LMDeploy (InternLM) LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
  • RouteLLM (LMSYS) A framework for serving and evaluating LLM routers - save LLM costs without compromising quality!
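
As an example of the serving side, vLLM's offline batch API takes only a few lines. This is a minimal sketch: the model choice is illustrative (a tiny model so the example stays light), and it assumes a CUDA-capable GPU is available.

```python
# Minimal offline batch inference with vLLM (model choice is illustrative; requires a GPU).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model to keep the example lightweight
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["Explain what a KV cache is in one sentence."], sampling_params)
print(outputs[0].outputs[0].text)
```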

LLM Application / RAG

  • AutoGPT AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
  • langchain (LangChain) Build context-aware reasoning applications
  • MetaGPT The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming
  • dify (LangGenius) Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
  • llama_index (LlamaIndex) LlamaIndex is a data framework for your LLM applications
  • Flowise (FlowiseAI) Drag & drop UI to build your customized LLM flow
  • mem0 (Mem0) The memory layer for Personalized AI
  • haystack (Deepset) LLM orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data.
  • GraphRAG (Microsoft) A modular graph-based Retrieval-Augmented Generation (RAG) system
  • RAGFlow (InfiniFlow) RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
  • llmware (LLMware.ai) Unified framework for building enterprise RAG pipelines with small, specialized models
  • llama-agentic-system (Meta) Agentic components of the Llama Stack APIs

LLM Testing / Monitoring

  • promptflow (Microsoft) Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
  • langfuse (Langfuse) Open source LLM engineering platform: Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more.
  • evidently (EvidentlyAI) Evidently is an open-source ML and LLM observability framework. Evaluate, test, and monitor any AI-powered system or data pipeline. From tabular data to Gen AI. 100+ metrics.
  • giskard (Giskard) Open-Source Evaluation & Testing for LLMs and ML models
  • promptfoo (promptfoo) Test your prompts, agents, and RAGs. Redteaming, pentesting, vulnerability scanning for LLMs. Improve your app's quality and catch problems. Compare performance of GPT, Claude, Gemini, Llama, and more. Simple declarative configs with command line and CI/CD integration.
  • phoenix (ArizeAI) AI Observability & Evaluation
  • agenta (Agenta.ai) The all-in-one LLM developer platform: prompt management, evaluation, human feedback, and deployment all in one place.
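
As one illustration of this category, Langfuse can trace an application function with a decorator. This is a minimal sketch assuming the 2.x SDK's decorator import path and Langfuse API keys in the environment; the function itself is a stub standing in for your LLM pipeline.

```python
# Minimal tracing sketch with the Langfuse 2.x decorator API
# (assumes LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are set in the environment).
from langfuse.decorators import observe

@observe()  # records this call as a trace in Langfuse
def answer(question: str) -> str:
    # call your LLM / RAG pipeline here; the return value is captured as the trace output
    return "stub answer to: " + question

print(answer("What is retrieval-augmented generation?"))
```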

LLM Guardrails / Security

  • NeMo-Guardrails (NVIDIA) NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
  • guardrails (GuardrailsAI) Adding guardrails to large language models.
  • PurpleLlama (Meta) Set of tools to assess and improve LLM security.
  • llm-guard (ProtectAI) The Security Toolkit for LLM Interactions
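
As an illustration, NeMo Guardrails wraps an LLM behind a rails configuration. A minimal sketch follows; the ./config directory (YAML plus Colang files defining the rails) is assumed to exist and is not shown here.

```python
# Minimal NeMo Guardrails sketch (the ./config directory with rails definitions is assumed to exist).
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # YAML + Colang files defining the guardrails
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Hello there!"}])
print(response["content"])
```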

LLM Cookbook / Examples

  • openai-cookbook (OpenAI) Examples and guides for using the OpenAI API
  • gemini-cookbook (Google) Examples and guides for using the Gemini API.
  • anthropic-cookbook (Anthropic) A collection of notebooks/recipes showcasing some fun and effective ways of using Claude.
  • amazon-bedrock-workshop (AWS) This is a workshop designed for Amazon Bedrock, a foundational model service.
  • Phi-3CookBook (Microsoft) This is a cookbook for getting started with Phi-3, a family of open AI models developed by Microsoft.
  • mistral-cookbook (Mistral) The Mistral Cookbook features examples contributed by Mistralers and our community, as well as our partners.
  • amazon-bedrock-samples (AWS) This repository contains examples to help customers get started with the Amazon Bedrock service, covering all available foundational models.
  • cohere-notebooks (Cohere) Code examples and jupyter notebooks for the Cohere Platform
  • gemma-cookbook (Google) A collection of guides and examples for the Gemma open models from Google.
  • upstage-cookbook (Upstage) Upstage API examples and guides.
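
Most of these cookbooks start from a basic chat-completion call. For the OpenAI Python SDK that looks like the sketch below; the model name is illustrative and the API key is assumed to be in the environment.

```python
# Basic chat-completion call with the OpenAI Python SDK (model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give me one tip for running LLMs in production."}],
)
print(response.choices[0].message.content)
```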

LLM Courses / Education

  • generative-ai-for-beginners (Microsoft) 18 lessons to get started building with Generative AI.
  • llm-course Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
  • LLMs-from-scratch Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step
  • hands-on-llms Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials
  • llm-zoomcamp (DataTalksClub) LLM Zoomcamp - a free online course about building a Q&A system
  • llm-twin-course (DecodingML) Learn for free how to build an end-to-end production-ready LLM & RAG system using LLMOps best practices: ~ source code + 12 hands-on lessons

Acknowledgements

This project is inspired by Awesome Production Machine Learning.