Pinned Repositories
alacritty
A cross-platform, OpenGL terminal emulator.
awesome-zero-knowledge-proofs
A curated list of awesome things related to learning Zero-Knowledge Proofs (ZKP).
cpplab
dotfiles
Configuration for Arch Linux, Hyprland, kitty, kakoune, zsh and more
gcc
github-slideshow
A robot-powered training repository :robot:
glibc
GNU Libc - Extremely old repo used for research purposes years ago. Please do not rely on this repo.
linux
Linux kernel source tree
llama.cpp.kernels
kernels in llama.cpp
nvim
Structured, documented, super fast Neovim configuration. Possibly the best Neovim config in Fandou Garden (翻斗花园)[^1].
KyeeHuang's Repositories
KyeeHuang/llama.cpp.kernels
kernels in llama.cpp
KyeeHuang/alacritty
A cross-platform, OpenGL terminal emulator.
KyeeHuang/awesome-zero-knowledge-proofs
A curated list of awesome things related to learning Zero-Knowledge Proofs (ZKP).
KyeeHuang/cpplab
KyeeHuang/dotfiles
Configuration for Arch Linux, Hyprland, kitty, kakoune, zsh and more
KyeeHuang/gcc
KyeeHuang/github-slideshow
A robot-powered training repository :robot:
KyeeHuang/glibc
GNU Libc - Extremely old repo used for research purposes years ago. Please do not rely on this repo.
KyeeHuang/Human-detection-and-Tracking
Human-detection-and-Tracking
KyeeHuang/linux
Linux kernel source tree
KyeeHuang/nvim
Structured, documented, super fast Neovim configuration. Possibly the best Neovim config in Fandou Garden (翻斗花园)[^1].
KyeeHuang/qq-linux
KyeeHuang/x1c6-hackintosh
READMEs, Clover configurations, patches, and notes for the ThinkPad X1 Carbon 6th Gen Hackintosh
KyeeHuang/llama.cpp
LLM inference in C/C++
KyeeHuang/mit-jos
mit jos lab
KyeeHuang/pika
Pika is a Redis-compatible database developed by Qihoo's infrastructure team.
KyeeHuang/redis
Redis is an in-memory database that persists on disk. The data model is key-value, but many different kinds of values are supported: Strings, Lists, Sets, Sorted Sets, Hashes, Streams, HyperLogLogs, Bitmaps.
KyeeHuang/spdk
Storage Performance Development Kit
KyeeHuang/twemproxy
A fast, light-weight proxy for memcached and redis
KyeeHuang/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs