Pinned Repositories
chessgpt
litgpt
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
pytorch-lightning
Pretrain, finetune ANY AI model of ANY size on multiple GPUs and TPUs with zero code changes.
lightning
Deep learning framework to train, deploy, and ship AI products Lightning fast.
lit-gpt
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
mf-foom.github.io
Place to host some simple experiments.
wikivec2text
Simple embedding -> text model trained on a small subset of Wikipedia sentences.
bistro
Opinionated GPT implementation and finetuning harness.
MF-FOOM's Repositories
MF-FOOM/wikivec2text
Simple embedding -> text model trained on a small subset of Wikipedia sentences.
MF-FOOM/chessgpt
MF-FOOM/lightning
Deep learning framework to train, deploy, and ship AI products Lightning fast.
MF-FOOM/lit-gpt
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
MF-FOOM/mf-foom.github.io
Place to host some simple experiments.