Pinned Repositories
accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
alphafold3-pytorch
Implementation of AlphaFold 3 in PyTorch
backgammon
Command-line backgammon with a bot trained using reinforcement learning
ComfyUI
The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface.
core
🏡 Open source home automation that puts local control and privacy first.
cramming
Cramming the training of a (BERT-type) language model into limited compute.
cuda-samples
Samples for CUDA developers that demonstrate features in the CUDA Toolkit
exllamav2
A fast inference library for running LLMs locally on modern consumer-class GPUs
flax
Flax is a neural network library for JAX that is designed for flexibility.
Friend
AI wearable with 24h+ battery life
alpoge's Repositories
alpoge/accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
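Below is a minimal sketch of the training-loop pattern this library enables, using its documented Accelerator API (prepare, backward); the toy model, data, and hyperparameters are illustrative assumptions, not code from the repository.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from accelerate import Accelerator

    # Toy data and model purely for illustration.
    dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
    dataloader = DataLoader(dataset, batch_size=8)
    model = nn.Linear(10, 1)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    accelerator = Accelerator()  # e.g. Accelerator(mixed_precision="bf16") on supported hardware
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    for inputs, targets in dataloader:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)  # replaces loss.backward(); handles device placement and precision scaling
        optimizer.step()

The same script can then be launched across GPUs or nodes with the accelerate launch CLI without changing the loop.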
alpoge/alphafold3-pytorch
Implementation of AlphaFold 3 in PyTorch
alpoge/backgammon
Command-line backgammon with a bot trained using reinforcement learning
alpoge/ComfyUI
The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface.
alpoge/core
🏡 Open source home automation that puts local control and privacy first.
alpoge/cramming
Cramming the training of a (BERT-type) language model into limited compute.
alpoge/cuda-samples
Samples for CUDA developers that demonstrate features in the CUDA Toolkit
alpoge/exllamav2
A fast inference library for running LLMs locally on modern consumer-class GPUs
alpoge/flax
Flax is a neural network library for JAX that is designed for flexibility.
alpoge/Friend
AI wearable with 24h+ battery life
alpoge/gpt-neox
An implementation of model-parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
alpoge/hlb-gpt
Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wikitext-103 on a single A100 in <100 seconds. Scales to larger models with one parameter change (feature currently in alpha).
alpoge/jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
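As a quick illustration of the composable transformations named above (differentiate, vectorize, JIT), here is a minimal sketch; the loss function and array shapes are made up for the example.

    import jax
    import jax.numpy as jnp

    def loss(w, x):
        # Toy scalar loss for illustration.
        return jnp.sum((x @ w) ** 2)

    grad_loss = jax.grad(loss)                              # differentiate with respect to w
    batched_grad = jax.vmap(grad_loss, in_axes=(None, 0))   # vectorize over a batch of x
    fast_grad = jax.jit(batched_grad)                       # JIT-compile for CPU/GPU/TPU

    w = jnp.ones(3)
    xs = jnp.ones((8, 3))
    print(fast_grad(w, xs).shape)                           # (8, 3): one gradient per batch element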
alpoge/llama.cpp
LLM inference in C/C++
alpoge/llm.c
LLM training in simple, raw C/CUDA
alpoge/nanoT5
Fast & Simple repository for pre-training and fine-tuning T5-style models
alpoge/open-gpu-kernel-modules
NVIDIA Linux open GPU kernel modules with P2P support
alpoge/Open-Sora
Open-Sora: Democratizing Efficient Video Production for All
alpoge/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
alpoge/pytorch-lightning
Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes.
alpoge/stable-diffusion-webui
Stable Diffusion web UI
alpoge/SWE-agent
SWE-agent: Agent-Computer Interfaces Enable Software Engineering Language Models
alpoge/tabbyAPI
An OAI-compatible exllamav2 API that's both lightweight and fast
alpoge/text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.
alpoge/tortoise-tts
A multi-voice TTS system trained with an emphasis on quality
alpoge/VoiceCraft
Zero-Shot Speech Editing and Text-to-Speech in the Wild