faseehahmed26's Stars
AUTOMATIC1111/stable-diffusion-webui
Stable Diffusion web UI
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
sebastianruder/NLP-progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
mudler/LocalAI
:robot: The free, open-source alternative to OpenAI, Claude, and others. Self-hosted and local-first: a drop-in replacement for OpenAI that runs on consumer-grade hardware, with no GPU required. Runs gguf, transformers, diffusers, and many other model architectures, and can generate text, audio, video, and images, with voice-cloning capabilities.
openai/evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
cocktailpeanut/dalai
The simplest way to run LLaMA on your local machine
BlinkDL/ChatRWKV
ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and is open source.
bigscience-workshop/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
huggingface/text-generation-inference
Large Language Model Text Generation Inference
EleutherAI/gpt-neox
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
microsoft/DeepSpeedExamples
Example models using DeepSpeed
data-science-on-aws/data-science-on-aws
AI and Machine Learning with Kubeflow, Amazon EKS, and SageMaker
paperswithcode/galai
Model API for GALACTICA
IST-DASLab/gptq
Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
microsoft/DeepSpeed-MII
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
google-research/FLAN
bigscience-workshop/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
statmike/vertex-ai-mlops
Google Cloud Platform Vertex AI end-to-end workflows for machine learning operations
wgryc/phasellm
Large language model evaluation and workflow framework from Phase AI.
Xirider/finetune-gpt2xl
Guide: Fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
liangwq/Chatglm_lora_multi-gpu
Multi-GPU training for ChatGLM using DeepSpeed
windson/fastapi
FastAPI tutorials and methods for deploying to cloud and on-prem infrastructure
X-jun-0130/LLM-Pretrain-FineTune
DeepSpeed, LLMs, Medical_Dialogue, medical large language models, pretraining, fine-tuning
HuangLK/transpeeder
Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism
intel/intel-extension-for-deepspeed
Intel® Extension for DeepSpeed* is an extension that brings SYCL kernel support to DeepSpeed on Intel GPU (XPU) devices. Note that XPU is already supported in stock (upstream) DeepSpeed.
lxe/llama-tune
LLaMa Tuning with Stanford Alpaca Dataset using Deepspeed and Transformers
distable/core
The stable core is your personal server for AI rendering, powered by community plugins
Yusuf-YENICERI/ChatGPT-Like-Bot-On-Google-Collab
Run a ChatGPT-like bot with one click
ashishpatel26/CheatSheet-LLM
The LLM (Large Language Model) Cheatsheet is a quick-reference guide to the key concepts and techniques of natural language processing (NLP) and language modeling, designed to be a helpful tool for both beginners and advanced practitioners in the field.