lphxx6222712's Stars
Significant-Gravitas/AutoGPT
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
nomic-ai/gpt4all
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
microsoft/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
Vision-CAIR/MiniGPT-4
Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
Stability-AI/StableLM
StableLM: Stability AI Language Models
salesforce/LAVIS
LAVIS - A One-stop Library for Language-Vision Intelligence
togethercomputer/OpenChatKit
nichtdax/awesome-totally-open-chatgpt
A list of totally open alternatives to ChatGPT
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
thunlp/PromptPapers
Must-read papers on prompt-based tuning for pre-trained language models.
Kent0n-Li/ChatDoctor
deep-diver/LLM-As-Chatbot
LLM as a Chatbot Service
bigscience-workshop/promptsource
Toolkit for creating, sharing and using natural language prompts.
PhoebusSi/Alpaca-CoT
We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts to initiate any meaningful PR on this repo and integrate as many LLM-related technologies as possible. (We have built a fine-tuning platform for large models that is easy for researchers to pick up and use; we welcome any meaningful PRs from open-source enthusiasts!)
young-geng/EasyLM
Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax.
X-PLUG/mPLUG-Owl
mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
atfortes/Awesome-LLM-Reasoning
Reasoning in Large Language Models: Papers and Resources, including Chain-of-Thought and OpenAI o1 🍓
hyperonym/basaran
Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
showlab/Image2Paragraph
[A toolbox for fun.] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet.
kbressem/medAlpaca
LLM finetuned for medical question answering
bigscience-workshop/biomedical
Tools for curating biomedical training data for large-scale language modeling
cccntu/minLoRA
minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.
atfortes/Awesome-Controllable-Diffusion
Papers and resources on Controllable Generation using Diffusion Models, including ControlNet, DreamBooth, IP-Adapter.
cambridgeltl/visual-med-alpaca
Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on LLaMA-7B.
BMEII-AI/RadImageNet
RadImageNet: pre-trained convolutional neural networks trained solely on medical images, to be used as the basis for transfer learning in medical imaging applications.
Alibaba-MIIL/ML_Decoder
Official PyTorch implementation of "ML-Decoder: Scalable and Versatile Classification Head" (2021)