Pinned Repositories
AdaLoRA
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).
alpa
Training and serving large-scale neural networks
alpaca-lora
Instruct-tune LLaMA on consumer hardware
AlpacaDataCleaned
Alpaca dataset from Stanford, cleaned and curated
Anima
The first open-source QLoRA-based 33B Chinese large language model
Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
Auto-GPT-ZH
Chinese version of Auto-GPT and its enthusiast community, kept in sync with the upstream project; covers AI entrepreneurship and self-media, and using AI for work, study, creation, and monetization
Baichuan-13B
A 13B large language model developed by Baichuan Intelligent Technology
BBT-FinCUGE-Applications
Blue0rigin's Repositories
Blue0rigin/Anima
The first open-source QLoRA-based 33B Chinese large language model
Blue0rigin/Baichuan-13B
A 13B large language model developed by Baichuan Intelligent Technology
Blue0rigin/ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM | 开源双语对话语言模型
Blue0rigin/ChatLaw
A Chinese legal large language model
Blue0rigin/chroma
the AI-native open-source embedding database
Blue0rigin/CSrankings
A web app for ranking computer science departments according to their research output in selective venues, and for finding active faculty across a wide range of areas.
Blue0rigin/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Blue0rigin/docker-llama2-chat
Play with LLaMA2 (official / Chinese version / INT4 / llama2.cpp) in only 3 steps! (no GPU / 5GB VRAM / 8–14GB VRAM)
Blue0rigin/DragGAN
Official Code for DragGAN (SIGGRAPH 2023)
Blue0rigin/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5.
Blue0rigin/FastEdit
🩹Editing large language models within 10 seconds⚡
Blue0rigin/Fengshenbang-LM
Fengshenbang-LM is an open-source large-model ecosystem led by the Cognitive Computing and Natural Language Research Center of the IDEA Institute, serving as infrastructure for Chinese AIGC and cognitive intelligence.
Blue0rigin/gpt-engineer
Specify what you want it to build, the AI asks for clarification, and then builds it.
Blue0rigin/HolisticPU
Beyond Myopia: Learning from Positive and Unlabeled Data through Holistic Predictive Trends [NeurIPS 2023]
Blue0rigin/inseq
Interpretability for sequence generation models 🐛 🔍
Blue0rigin/llama2.c
Inference Llama 2 in one file of pure C
Blue0rigin/MedQA-ChatGLM
🛰️ Fine-tuning ChatGLM on real medical dialogue data with LoRA, P-Tuning V2, Freeze, RLHF, and other methods; our ambitions go beyond medical Q&A
Blue0rigin/paper2gui
Convert AI papers to GUIs, making it easy and convenient for everyone to use cutting-edge artificial intelligence technology
Blue0rigin/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Blue0rigin/PLM
Blue0rigin/privateGPT
Interact privately with your documents using the power of GPT, 100% privately, no data leaks
Blue0rigin/promptbench
A robustness evaluation framework for large language models on adversarial prompts
Blue0rigin/pyro
Deep universal probabilistic programming with Python and PyTorch
Blue0rigin/qlora
QLoRA: Efficient Finetuning of Quantized LLMs
Blue0rigin/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
Blue0rigin/SCARCE
[ICML 2024] Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical
Blue0rigin/SSLRec
SSLRec: A Self-Supervised Learning Library for Recommendation
Blue0rigin/text-generation-webui
A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
Blue0rigin/torchkeras
PyTorch ❤️ Keras 😋😋
Blue0rigin/WizardLM
Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder