Pinned Repositories
bazix
Nix + Bazel = 🥰
editor
Neovim configuration I spend way too much time on
hack-gigabyte
Gigabyte Aero 15W with a dash of macOS Mojave
morph
exploratory WYSIWYG editor
whispercpp
Pybind11 bindings for Whisper.cpp
BentoML
The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
OpenLLM
Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud.
quartz
🌱 a fast, batteries-included static-site generator that transforms Markdown content into fully functional websites
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
avante.nvim
Use your Neovim like using Cursor AI IDE!
aarnphm's Repositories
aarnphm/whispercpp
Pybind11 bindings for Whisper.cpp
aarnphm/morph
exploratory WYSIWYG editor
aarnphm/editor
Neovim configuration I spend way too much time on
aarnphm/aarnphm.github.io
aarnphm/dix
dotfiles + nix = dix
aarnphm/aarnphm
aarnphm/emulators
a haphazard place for all kinds of terminals
aarnphm/sites
partial source of aarnphm[dot]xyz [GENERATED DO NOT EDIT]
aarnphm/ayamir-nvimdots
A well configured and structured Neovim.
aarnphm/BentoML
Model Serving Made Easy
aarnphm/pdnfa
playground for building DFAs, NFAs, PDAs.
aarnphm/advent
aoc
aarnphm/avante.nvim
Use your Neovim like using Cursor AI IDE!
aarnphm/ec2-github-runner
A fork of https://github.com/machulav/ec2-github-runner, but with unattended runs
aarnphm/inference
testing some fast shi*
aarnphm/LazyVim
Neovim config for the lazy
aarnphm/llguidance
Super-fast Structured Outputs
aarnphm/luasnip-latex-snippets.nvim
A port of Gilles Castel's UltiSnip snippets for LuaSnip.
aarnphm/mini.nvim
Library of 40+ independent Lua modules improving overall Neovim (version 0.8 and higher) experience with minimal effort
aarnphm/nix-homebrew
Homebrew installation manager for nix-darwin
aarnphm/obsidian.nvim
Obsidian 🤝 Neovim
aarnphm/rehype-citation
Rehype plugin to add citation and bibliography from bibtex files
aarnphm/s3-lfs-proxy
R2 LFS proxy server because the default Git LFS is too slow
aarnphm/sglang
SGLang is a fast serving framework for large language models and vision language models.
aarnphm/stake
we have stake at home
aarnphm/tfwr
the farmer was replaced
aarnphm/tinygrad
You like pytorch? You like micrograd? You love tinygrad! ❤️
aarnphm/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
aarnphm/vllm-blog-source
aarnphm/vllm-project.github.io