opcoder's Stars
CompVis/stable-diffusion
A latent text-to-image diffusion model
binary-husky/gpt_academic
Provides a practical interactive interface for LLMs such as GPT/GLM, with special optimization for paper reading, polishing, and writing. Modular design with support for custom shortcut buttons & function plugins; project analysis & self-translation for Python, C++, and other codebases; PDF/LaTeX paper translation & summarization; parallel querying of multiple LLM models; and local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, moss, and more.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
svc-develop-team/so-vits-svc
SoftVC VITS Singing Voice Conversion
Vision-CAIR/MiniGPT-4
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
triton-lang/triton
Development repository for the Triton language and compiler
RUCAIBox/LLMSurvey
The official GitHub page for the survey paper "A Survey of Large Language Models".
facebookresearch/dinov2
PyTorch code and models for the DINOv2 self-supervised learning method.
togethercomputer/OpenChatKit
nebuly-ai/nebuly
The user analytics platform for LLMs
THUDM/CodeGeeX
CodeGeeX: An Open Multilingual Code Generation Model (KDD 2023)
Farama-Foundation/Gymnasium
An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
openai/guided-diffusion
openai/consistency_models
Official repo for consistency models.
promptslab/Awesome-Prompt-Engineering
This repository contains hand-curated resources for Prompt Engineering, with a focus on Generative Pre-trained Transformer (GPT), ChatGPT, PaLM, etc.
mlfoundations/open_flamingo
An open-source framework for training large multimodal models.
GuyTevet/motion-diffusion-model
The official PyTorch implementation of the paper "Human Motion Diffusion Model"
qwopqwop200/GPTQ-for-LLaMa
4 bits quantization of LLaMA using GPTQ
ModelTC/lightllm
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
FasterDecoding/Medusa
Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
Maks-s/sd-akashic
A compendium of information regarding Stable Diffusion (SD)
ELS-RD/kernl
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
dbolya/tomesd
Speed up Stable Diffusion with this one simple trick!
mit-han-lab/smoothquant
[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
microsoft/HydraLab
Intelligent cloud testing made easy.
kuleshov-group/llmtools
Finetuning Large Language Models on One Consumer GPU in 2 Bits
ENOT-AutoDL/onnx2torch
Convert ONNX models to PyTorch.
google/pyglove
Manipulating Python Programs
ModelTC/United-Perception
United Perception
Guangxuan-Xiao/torch-int
This repository contains integer operators on GPUs for PyTorch.