glarsson's Stars
martinsprojects/covid19-screensaver
COVID-19-themed screensaver for Linux, inspired by the classic flying toasters screensaver.
yarik2720/Synergy-SM
Map fixes for Synergy Mod
anselale/Dignity
Mozilla-Ocho/llamafile
Distribute and run LLMs with a single file.
b4rtaz/distributed-llama
Tensor parallelism is all you need. Run LLMs on an AI cluster at home using any device. Distribute the workload, divide RAM usage, and increase inference speed.
IST-DASLab/marlin
FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.
daswer123/xtts-api-server
A simple FastAPI server to run XTTSv2
Mobile-Artificial-Intelligence/maid
Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
fishaudio/fish-speech
SOTA Open Source TTS
myshell-ai/OpenVoice
Instant voice cloning by MIT and MyShell.
vladmandic/automatic
SD.Next: All-in-one for AI generative images
SJTU-IPADS/PowerInfer
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
suno-ai/bark
🔊 Text-Prompted Generative Audio Model
yl4579/StyleTTS2
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
erew123/alltalk_tts
AllTalk is based on the Coqui TTS engine and is similar to the Coqui_tts extension for Text generation webUI, but it supports a variety of advanced features, such as a settings page, low-VRAM support, DeepSpeed, a narrator, model fine-tuning, custom models, and WAV file maintenance. It can also be used with third-party software via JSON calls.
coqui-ai/TTS
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
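For reference, a minimal voice-cloning sketch, assuming the 🐸TTS Python API (TTS.api.TTS) with an XTTS-v2 checkpoint; the speaker reference WAV and output path are placeholders:

```python
# Minimal sketch, assuming the TTS.api.TTS interface from coqui-ai/TTS.
# Paths below are placeholders.
from TTS.api import TTS

# Load a multilingual XTTS-v2 model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the voice in speaker.wav and synthesize English speech to a file.
tts.tts_to_file(
    text="Hello from a cloned voice.",
    speaker_wav="speaker.wav",   # placeholder reference recording
    language="en",
    file_path="output.wav",
)
```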
abybaddi009/albus
Locally hosted AI code completion plugin for Visual Studio Code
kmccleary3301/QueryLake
kmccleary3301/QueryLakeBackend
pkuliyi2015/multidiffusion-upscaler-for-automatic1111
Tiled Diffusion and VAE optimizations, licensed under CC BY-NC-SA 4.0
ggerganov/whisper.cpp
Port of OpenAI's Whisper model in C/C++
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
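As a reference point, a minimal offline-inference sketch, assuming vLLM's Python API (vllm.LLM and SamplingParams); the model name is just an example:

```python
# Minimal sketch, assuming vLLM's offline LLM / SamplingParams API.
# The model name is an example placeholder.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Batched generation: vLLM schedules both prompts for high-throughput inference.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```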
stsaz/phiola
Fast audio player, recorder, and converter for Windows, Linux & Android
letta-ai/letta
Letta (formerly MemGPT) is a framework for creating LLM services with memory.
abetlen/llama-cpp-python
Python bindings for llama.cpp
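A minimal completion sketch, assuming llama-cpp-python's Llama class; the GGUF model path is a placeholder:

```python
# Minimal sketch, assuming llama-cpp-python's Llama class.
# The GGUF model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf", n_ctx=2048)

# Simple completion call; returns an OpenAI-style response dict.
result = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(result["choices"][0]["text"])
```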
monatis/clip.cpp
CLIP inference in plain C/C++ with no extra dependencies
PowerShell/PowerShell
PowerShell for every system!
monatis/lmm.cpp
Inference of large multimodal models in C/C++ (LLaVA and others)
glarsson/openai-whisper-test1
stsaz/fmedia
Fast audio player, recorder, and converter