Pinned Repositories
AAAI2019
AC297r_2019_Kensho
Template for AC297r projects
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
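Below is a minimal sketch of a typical training step with accelerate, assuming a toy model, optimizer, and dataloader (all placeholders, not taken from any repo here):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # detects the multi-GPU/TPU/mixed-precision setup

# Toy model and data, purely illustrative
model = torch.nn.Linear(128, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 128), torch.randint(0, 2, (64,))
)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

# prepare() wraps everything for the current distributed configuration
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```

The same script then runs unchanged on a single GPU, multiple GPUs, or a TPU, with the launch configuration handled by `accelerate launch`.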
Auto-PyTorch
Automatic architecture search and hyperparameter optimization for PyTorch
ChatGPT-Decoded-GPT2-FAQ-Bot-RLHF-PPO
A Practical Guide to Developing a Reliable FAQ Chatbot with Reinforcement Learning and Human Feedback using GPT-2 on AWS
EnhanceKGEmbedding
Few-shot-learning-NLP-Tool
NLPTool
This is my own NLP tool
pumpkin-book
Derivations and explanations of the formulas in Machine Learning (the "Watermelon Book"); read it online at https://datawhalechina.github.io/pumpkin-book
transmomo.pytorch
This is the official PyTorch implementation of the CVPR 2020 paper "TransMoMo: Invariance-Driven Unsupervised Video Motion Retargeting".
gaohuan2015's Repositories
gaohuan2015/aviary
Ray Aviary - evaluate multiple LLMs easily
gaohuan2015/bisheng
Bisheng is an open LLM devops platform for next generation AI applications.
gaohuan2015/Chinese-Llama-2-7b
The first downloadable and runnable Chinese LLaMA2 model in the open-source community!
gaohuan2015/CodeFormer
[NeurIPS 2022] Towards Robust Blind Face Restoration with Codebook Lookup Transformer
gaohuan2015/deep-rl-class
This repo contains the syllabus of the Hugging Face Deep Reinforcement Learning Course.
gaohuan2015/distributed-llama
Run LLMs on weak devices or make powerful devices even more powerful by distributing the workload and dividing the RAM usage.
gaohuan2015/lit-gpt
Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT. Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
gaohuan2015/LLaMA-Efficient-Tuning
Easy-to-use LLM fine-tuning framework (LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, ChatGLM2)
gaohuan2015/Llama2-Chinese
The Llama Chinese community: the best Chinese Llama LLMs, fully open source and commercially usable
gaohuan2015/llama2.c
Inference Llama 2 in one file of pure C
gaohuan2015/llm-awq
AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
gaohuan2015/llm-colosseum
Benchmark LLMs by fighting in Street Fighter 3! A new way to evaluate the quality of an LLM.
gaohuan2015/LLM-RLHF-Tuning
LLM Tuning with PEFT (SFT+RM+PPO+DPO with LoRA)
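As a rough illustration of the LoRA piece of such a pipeline, here is a minimal sketch using the peft library; the base model and target modules are illustrative choices, not this repo's defaults:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the small adapter weights train
```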
gaohuan2015/LLM-Tuning
Tuning LLMs with no tears💦, sharing LLM-tools with love❤️.
gaohuan2015/MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with ChatGPT Training Pipeline. Trains medical LLMs, covering continued pre-training, supervised fine-tuning, reward modeling, and reinforcement learning.
gaohuan2015/MetaGPT
🌟 The Multi-Agent Meta Programming Framework: given a one-line requirement, it returns a PRD, design, tasks, and a repo
gaohuan2015/mistral-src
gaohuan2015/MOSS-RLHF
MOSS-RLHF
gaohuan2015/Mr.-Ranedeer-AI-Tutor
A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.
gaohuan2015/MS-AMP
Microsoft Automatic Mixed Precision Library
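MS-AMP has its own API; as a generic illustration of automatic mixed precision in plain PyTorch, a training step typically looks like this (toy model, CUDA assumed):

```python
import torch

model = torch.nn.Linear(128, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss to avoid fp16 underflow

inputs = torch.randn(8, 128, device="cuda")
labels = torch.randint(0, 2, (8,), device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)

scaler.scale(loss).backward()
scaler.step(optimizer)  # unscales gradients, then steps
scaler.update()
```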
gaohuan2015/open-interpreter
OpenAI's Code Interpreter in your terminal, running locally
gaohuan2015/Open-Sora-Plan
This project aims to reproduce Sora (OpenAI's T2V model), but we have only limited resources. We deeply hope the whole open-source community can contribute to this project.
gaohuan2015/opencompass
OpenCompass is an LLM evaluation platform supporting a wide range of models (LLaMA, LLaMA2, ChatGLM2, ChatGPT, Claude, etc.) over 50+ datasets.
gaohuan2015/othello_world
Emergent world representations: Exploring a sequence model trained on a synthetic task
gaohuan2015/pykoi
pykoi: Active learning in one unified interface
gaohuan2015/smoothquant
[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
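The core trick, sketched here with plain NumPy rather than this repo's code, is to migrate activation outliers into the weights through a per-channel scale s, using the identity (X · diag(s)⁻¹)(diag(s) · W) = XW:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8)) * np.array([1, 1, 1, 50, 1, 1, 1, 1.0])  # one outlier channel
W = rng.normal(size=(8, 16))

alpha = 0.5  # migration strength (the paper's default)
s = np.abs(X).max(axis=0) ** alpha / np.abs(W).max(axis=1) ** (1 - alpha)

X_smooth = X / s           # activations lose their outliers, easier to quantize
W_smooth = W * s[:, None]  # weights absorb the scaling

assert np.allclose(X @ W, X_smooth @ W_smooth)  # output is unchanged
```

After smoothing, both X_smooth and W_smooth can be quantized to INT8 with far less error than quantizing X directly.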
gaohuan2015/transformer-debugger
gaohuan2015/unsloth
QLoRA fine-tuning that is 5x faster and uses 60% less memory
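unsloth uses its own optimized kernels; for context, the underlying QLoRA recipe with stock transformers + bitsandbytes looks roughly like this (model name illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # NormalFloat4 from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,        # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative; any causal LM works
    quantization_config=bnb_config,
    device_map="auto",
)
# LoRA adapters (e.g. via peft) are then attached on top of the frozen 4-bit base.
```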
gaohuan2015/wav2lip_288x288
gaohuan2015/zero_nlp
Chinese NLP solutions (LLMs, data, models, training, inference)