matiasED's Stars
NVIDIA/NeMo-Guardrails
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
huggingface/knockknock
🚪✊Knock Knock: Get notified when your training ends with only two additional lines of code
danielgross/LlamaAcademy
A school for camelids
thunlp/PTR
Prompt Tuning with Rules
yeagerai/yeagerai-agent
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Lightning-AI/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
mobarski/alpaca-libre
Reimplementation of the task generation part from the Alpaca paper
langflow-ai/langflow
Langflow is a low-code app builder for RAG and multi-agent AI applications. It’s Python-based and agnostic to any model, API, or database.
huggingface/trl
Train transformer language models with reinforcement learning.
bitsandbytes-foundation/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
UKPLab/sentence-transformers
State-of-the-Art Text Embeddings
minimaxir/aitextgen
A robust Python tool for text-based AI training and generation using GPT-2.
Shivanandroy/simpleT5
simpleT5 is built on top of PyTorch Lightning⚡️ and Transformers🤗, letting you quickly train your T5 models.
LAION-AI/Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, interacts with third-party systems, and retrieves information dynamically to do so.
thunlp/OpenPrompt
An Open-Source Framework for Prompt-Learning.
karpathy/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
bigscience-workshop/promptsource
Toolkit for creating, sharing and using natural language prompts.
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
promptslab/Promptify
Prompt Engineering | Prompt Versioning | Use GPT or other prompt-based models to get structured output. Join our Discord for prompt engineering, LLMs, and the latest research
hwchase17/notion-qa
hwchase17/langchain-hub
BlackHC/toma
Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory
amaiya/ktrain
ktrain is a Python library that makes deep learning and AI more accessible and easier to apply
dair-ai/ML-Papers-Explained
Explanations of key concepts in ML
IgorSusmelj/pytorch-styleguide
An unofficial style guide and best-practices summary for PyTorch
facebookresearch/balance
The balance Python package offers a simple workflow and methods for adjusting biased data samples so that inferences can be drawn about a target population of interest.
bigscience-workshop/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
karpathy/minGPT
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
allenai/RL4LMs
A modular RL library to fine-tune language models to human preferences