dnhkng's Stars
ggerganov/llama.cpp
LLM inference in C/C++
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
karpathy/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
Stability-AI/StableLM
StableLM: Stability AI Language Models
anderspitman/awesome-tunneling
List of ngrok/Cloudflare Tunnel alternatives and other tunneling software and services. Focus on self-hosting.
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
chroma-core/chroma
The AI-native open-source embedding database
BlinkDL/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of both RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
antimatter15/alpaca.cpp
Locally run an Instruction-Tuned Chat-Style LLM
BlinkDL/ChatRWKV
ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and is open source.
Lightning-AI/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
guillaumekln/faster-whisper
Faster Whisper transcription with CTranslate2
serge-chat/serge
A web interface for chatting with Alpaca through llama.cpp. Fully dockerized, with an easy-to-use API.
aaronwangy/Data-Science-Cheatsheet
A helpful 5-page machine learning cheatsheet to assist with exam reviews, interview prep, and anything in between.
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features from various papers
togethercomputer/RedPajama-Data
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
dnhkng/GlaDOS
This is the Personality Core for GLaDOS, the first steps towards a real-life implementation of the AI from the Portal series by Valve.
turboderp/exllama
A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
mckaywrigley/paul-graham-gpt
AI search & chat for all of Paul Graham's essays.
vsitzmann/awesome-implicit-representations
A curated list of resources on implicit neural representations.
wiseman/py-webrtcvad
Python interface to the WebRTC Voice Activity Detector
sahil280114/codealpaca
whitphx/stlite
In-browser Streamlit 🎈🚀
tloen/llama-int8
Quantized inference code for LLaMA models
rhasspy/larynx
End to end text to speech system using gruut and onnx
Qengineering/Jetson-Nano-Ubuntu-20-image
Jetson Nano with Ubuntu 20.04 image
rbbrdckybk/dream-factory
Multi-threaded GUI manager for mass creation of AI-generated art with support for multiple GPUs.
catid/supercharger
Supercharge Open-Source AI Models
NVIDIA-AI-IOT/jetson_dla_tutorial
A tutorial for getting started with the Deep Learning Accelerator (DLA) on NVIDIA Jetson