Tim-Chard's Stars
shadcn-ui/ui
Beautifully designed components that you can copy and paste into your apps. Accessible. Customizable. Open Source.
pocketbase/pocketbase
Open Source realtime backend in 1 file
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
microsoft/autogen
A programming framework for agentic AI 🤖
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
MonitorControl/MonitorControl
🖥 Control your display's brightness & volume on your Mac as if it were a native Apple Display. Use Apple Keyboard keys or custom shortcuts. Shows the native macOS OSDs.
plausible/analytics
Simple, open source, lightweight (< 1 KB) and privacy-friendly web analytics alternative to Google Analytics.
codelucas/newspaper
newspaper3k is a news, full-text, and article metadata extraction library for Python 3.
triton-lang/triton
Development repository for the Triton language and compiler
meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A, and a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama for WhatsApp & Messenger.
dagster-io/dagster
An orchestration platform for the development, production, and observation of data assets.
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
NVIDIA/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
jzhang38/TinyLlama
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
joerick/pyinstrument
🚴 Call stack profiler for Python. Shows you why your code is slow!
google/gemma.cpp
Lightweight, standalone C++ inference engine for Google's Gemma models.
pytorch-labs/gpt-fast
Simple and efficient PyTorch-native transformer text generation in <1000 lines of Python.
google/gemma_pytorch
The official PyTorch implementation of Google's Gemma models
allenai/OLMo
Modeling, training, eval, and inference code for OLMo
buriy/python-readability
Fast Python port of arc90's readability tool, updated to match the latest readability.js!
StractOrg/stract
web search done right
labmlai/labml
🔎 Monitor deep learning model training and hardware usage from your mobile phone 📱
autodistill/autodistill
Images to inference with no labeling (use foundation models to train supervised models).
P403n1x87/austin
Python frame stack sampler for CPython
allenai/open-instruct
brucemiller/LaTeXML
LaTeXML: a TeX and LaTeX to XML/HTML/ePub/MathML translator.
dginev/ar5iv
A web service offering HTML5 articles from arXiv.org as converted with latexml
sanderwood/bgpt
Beyond Language Models: Byte Models are Digital World Simulators
koaning/memo
Decorators that log stats.
dginev/ar5ivist
A turnkey command for converting a LaTeX source to ar5iv-style HTML