aniket-mish's Stars
DiceDB/dice
DiceDB is a Redis-compliant, in-memory, real-time, reactive database optimized for modern hardware, for building and scaling truly real-time applications.
ben-n93/SQL-tips-and-tricks
SQL tips and tricks
Future-House/paper-qa
High accuracy RAG for answering questions from scientific documents with citations
MadcowD/ell
A language model programming library.
fastapi/fastapi
FastAPI framework, high performance, easy to learn, fast to code, ready for production
anthropics/courses
Anthropic's educational courses
NousResearch/DisTrO
Distributed Training Over-The-Internet
dair-ai/ML-Papers-of-the-Week
🔥 Highlighting the top ML papers every week.
sgl-project/sglang
SGLang is a fast serving framework for large language models and vision language models.
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training
pytorch/serve
Serve, optimize and scale PyTorch models in production
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
vsmolyakov/ml_algo_in_depth
ML algorithms in depth
SakanaAI/AI-Scientist
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery 🧑‍🔬
arcee-ai/mergekit
Tools for merging pretrained large language models.
AnswerDotAI/fastsql
AnswerDotAI/FastHTML-Gallery
endia-org/Endia
Arrays, Tensors and dynamic Neural Networks in Mojo 🔥
asg017/sqlite-vec
A vector search SQLite extension that runs anywhere!
djhworld/simple-computer
The Scott CPU from "But How Do It Know?" by J. Clark Scott
black-forest-labs/flux
Official inference repo for FLUX.1 models
AnswerDotAI/cold-compress
Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of GPT-Fast, a simple, PyTorch-native generation codebase.
karpathy/nano-llama31
nanoGPT-style version of Llama 3.1
Laz4rz/GPT-2
Following master Karpathy with a GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish
AnswerDotAI/fh-deploy
Deployment examples for FastHTML
pytorch/torchchat
Run PyTorch LLMs locally on servers, desktop and mobile
dottxt-ai/prompts
A prompting library
kvcache-ai/ktransformers
A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations
EurekaLabsAI/tensor
The Tensor (or Array)
MDK8888/vllmini
A minimal implementation of vLLM.