minhopark-neubla's Stars
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
twitter/the-algorithm
Source code for Twitter's Recommendation Algorithm
karpathy/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
LAION-AI/Open-Assistant
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
terrastruct/d2
D2 is a modern diagram scripting language that turns text to diagrams.
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
triton-lang/triton
Development repository for the Triton language and compiler
databrickslabs/dolly
Databricks' Dolly, a large language model trained on the Databricks Machine Learning Platform
microsoft/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
FMInference/FlexLLMGen
Running large language models on a single GPU for throughput-oriented scenarios.
facebookresearch/xformers
Hackable and optimized Transformers building blocks, supporting a composable construction.
facebookresearch/ImageBind
ImageBind One Embedding Space to Bind Them All
bigcode-project/starcoder
Home of StarCoder: fine-tuning & inference!
zilliztech/GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
EleutherAI/lm-evaluation-harness
A framework for few-shot evaluation of language models.
Lightning-AI/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
mosaicml/composer
Supercharge Your Model Training
mosaicml/llm-foundry
LLM training code for Databricks foundation models
leehosung/awesome-devteam
Resources that help in building a good development team
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
IST-DASLab/gptq
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
kuleshov/minillm
MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs
IST-DASLab/sparsegpt
Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
huggingface/nn_pruning
Prune a model while finetuning or training.
LudwigStumpp/llm-leaderboard
A joint community effort to create one central leaderboard for LLMs.
krishnap25/mauve
Package to compute Mauve, a similarity score between neural text and human text. Install with `pip install mauve-text`.
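A minimal usage sketch for this package, assuming the `compute_mauve` entry point and keyword arguments shown in the repo's README; the input texts here are placeholders, and the exact signature is worth checking against the current documentation.

```python
# Assumes: pip install mauve-text
# p_text / q_text are illustrative lists of model-generated and human-written strings.
import mauve

p_text = ["Generated text sample one.", "Generated text sample two."]
q_text = ["Human-written text sample one.", "Human-written text sample two."]

# Compute the MAUVE score between the two text distributions.
out = mauve.compute_mauve(p_text=p_text, q_text=q_text, device_id=0,
                          max_text_length=256, verbose=False)
print(out.mauve)  # score in [0, 1]; higher means the generated text is closer to human text
```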
terrastruct/d2-vscode
VSCode extension for D2 files.
dilekh/Talk-at-ICLR-2023