viceboogie's Stars
OpenBMB/ChatDev
Create Customized Software using Natural Language Idea (through LLM-powered Multi-Agent Collaboration)
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
huggingface/tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
open-neuromorphic/open-neuromorphic
List of open source neuromorphic projects: SNN training frameworks, DVS handling routines and so on.
serp-ai/ChatLLaMA-and-ChatGPT-Desktop-App
A desktop application written in PyQt5 (Python). Supports the OpenAI ChatGPT API as well as a locally running LLaMA model. Local inference works in 8-bit as well as 4/3/2-bit (the model must already be quantized).
FMInference/FlexLLMGen
Running large language models on a single GPU for throughput-oriented scenarios.
bitsandbytes-foundation/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
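The core idea behind bitsandbytes-style k-bit quantization can be sketched in a few lines: scale weights by their absolute maximum and round to signed integers. This is an illustration only; the library itself uses block-wise scaling and fused CUDA kernels.

```python
# Absmax k-bit quantization sketch (illustrative, not the bitsandbytes API).

def quantize_absmax(weights, bits=8):
    """Map floats to signed k-bit integers by scaling with the abs-max."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    amax = max(abs(w) for w in weights)
    scale = amax / qmax
    q = [round(w * qmax / amax) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from the integers and the stored scale."""
    return [x * scale for x in q]

w = [0.5, -1.0, 0.25, 0.9]
q, s = quantize_absmax(w)
w_hat = dequantize(q, s)                  # close to w, within one scale step
```

Lower bit widths shrink `qmax`, trading memory for a coarser grid; real k-bit schemes add per-block scales to keep the error bounded.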
tqdm/tqdm
:zap: A Fast, Extensible Progress Bar for Python and CLI
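What tqdm does can be sketched with a minimal stand-in: wrap an iterable and redraw a bar on each step. tqdm itself adds rate estimates, nesting, and proper terminal handling; this sketch only shows the wrapping pattern.

```python
import sys

def progress(iterable, total=None, width=30):
    """Minimal tqdm-style wrapper: yields items while redrawing a bar."""
    total = total if total is not None else len(iterable)
    for i, item in enumerate(iterable, 1):
        filled = width * i // total
        bar = "#" * filled + "-" * (width - filled)
        sys.stderr.write(f"\r|{bar}| {i}/{total}")  # \r overwrites the line
        yield item
    sys.stderr.write("\n")

acc = 0
for n in progress(range(100)):
    acc += n
```

The key design point, shared with tqdm, is that the wrapper is transparent: the loop body sees exactly the items of the original iterable.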
HackerAIOfficial/simple-llama-finetuner
Simple UI and CLI for LLaMA model fine-tuning
microsoft/LoRA
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
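The technique the paper names is easy to sketch: keep the pretrained weight W frozen and learn a low-rank update B·A with rank r much smaller than the weight dimensions. A minimal pure-Python illustration follows; loralib applies this inside nn.Linear layers and additionally scales the update by alpha/r, which is omitted here.

```python
# LoRA forward-pass sketch: y = x @ (W + B @ A), with only A and B trainable.

def matmul(X, Y):
    """Plain nested-list matrix multiply (illustration, no NumPy)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B):
    """W is the frozen d x k weight; B (d x r) and A (r x k) form the
    low-rank delta, so the effective weight is W + B @ A."""
    delta = matmul(B, A)
    W_eff = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]
    return matmul(x, W_eff)
```

With r = 1, B·A stores d + k numbers instead of d·k, which is why LoRA checkpoints are tiny compared to full fine-tunes.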
Lightning-AI/pytorch-lightning
Pretrain and fine-tune ANY AI model of ANY size on multiple GPUs and TPUs with zero code changes.
bigscience-workshop/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
oobabooga/text-generation-webui
A Gradio web UI for Large Language Models.
zylon-ai/private-gpt
Interact with your documents using the power of GPT, 100% privately, no data leaks
mar-muel/artificial-self-AMLD-2020
Workshop material for the AMLD 2020 workshop on "Meet your Artificial Self: Generate text that sounds like you"
edgar971/open-chat
A self-hosted, offline, ChatGPT-like chatbot with different LLM support. 100% private, with no data leaving your device.
ParisNeo/lollms
Lord of LLMs
nlpxucan/WizardLM
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
davidadamojr/TextRank
Python implementation of TextRank algorithm for automatic keyword extraction and summarization using Levenshtein distance as relation between text units. This project is based on the paper "TextRank: Bringing Order into Text" by Rada Mihalcea and Paul Tarau. https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf
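The setup the description outlines, text units as graph nodes with Levenshtein distance as the edge relation, scored by PageRank-style iteration, can be sketched as follows. The inverse-distance weighting used here is an assumption for illustration; the repo's exact formula may differ.

```python
# TextRank sketch: edit-distance similarity graph + PageRank power iteration.

def levenshtein(a, b):
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def textrank(units, d=0.85, iters=50):
    """Score text units by iterating the weighted PageRank recurrence."""
    n = len(units)
    # Edge weight: inverse of edit distance (assumed weighting).
    w = [[0.0 if i == j else 1.0 / (1 + levenshtein(units[i], units[j]))
          for j in range(n)] for i in range(n)]
    score = [1.0] * n
    for _ in range(iters):
        score = [(1 - d) + d * sum(w[j][i] * score[j] / sum(w[j])
                                   for j in range(n) if j != i)
                 for i in range(n)]
    return score
```

Units that sit close (in edit distance) to many other units accumulate score, which is the "order" the TextRank paper extracts from the graph.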
meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A, and a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama for WhatsApp & Messenger.
meta-llama/llama
Inference code for Llama models
lowerquality/gentle
gentle forced aligner
drethage/speech-denoising-wavenet
A neural network for end-to-end speech denoising
kan-bayashi/ParallelWaveGAN
Unofficial Parallel WaveGAN (+ MelGAN & Multi-band MelGAN & HiFi-GAN & StyleMelGAN) implementation in PyTorch
tomlepaine/fast-wavenet
Speedy Wavenet generation using dynamic programming :zap:
r9y9/wavenet_vocoder
WaveNet vocoder
vincentherrmann/pytorch-wavenet
An implementation of WaveNet with fast generation
ibab/tensorflow-wavenet
A TensorFlow implementation of DeepMind's WaveNet paper
Azure-Samples/azure-search-openai-demo
A sample app for the Retrieval-Augmented Generation pattern running in Azure, using Azure AI Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.
Azure-Samples/Cognitive-Speech-TTS
Microsoft Text-to-Speech API sample code in several languages, part of Cognitive Services.