Pinned Repositories
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed precision
annotated_deep_learning_paper_implementations
🧑🏫 60 implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠
Awesome-Reasoning-Foundation-Models
✨✨ Latest Papers and Benchmarks in Reasoning with Foundation Models
ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
eat_pytorch_in_20_days
PyTorch🍊🍉 is delicious, just eat it! 😋😋
eat_tensorflow2_in_30_days
TensorFlow 2.0 🍎🍊 is delicious, just eat it! 😋😋
GLM
GLM (General Language Model)
gpt-2
Code for the paper "Language Models are Unsupervised Multitask Learners"
muyunchengbi's Repositories
muyunchengbi/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
muyunchengbi/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
muyunchengbi/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
muyunchengbi/LLMSurvey
The official GitHub page for the survey paper "A Survey of Large Language Models".
muyunchengbi/Awesome-Reasoning-Foundation-Models
✨✨ Latest Papers and Benchmarks in Reasoning with Foundation Models
muyunchengbi/mamba
muyunchengbi/LLaMA-Factory
Easy-to-use LLM fine-tuning framework (LLaMA, BLOOM, Mistral, Baichuan, Qwen, ChatGLM)
muyunchengbi/MedicalGPT
MedicalGPT: Training Your Own Medical GPT Model with a ChatGPT Training Pipeline. Trains medical large language models, implementing continual pretraining, supervised fine-tuning, RLHF (reward modeling and reinforcement-learning training), and DPO (direct preference optimization).
muyunchengbi/Llama2-Chinese
Chinese Llama community: the best Chinese Llama large language models, fully open source and commercially usable
muyunchengbi/llama
Inference code for LLaMA models
muyunchengbi/RETFound_MAE
RETFound - A foundation model for retinal images
muyunchengbi/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
muyunchengbi/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning
muyunchengbi/accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed precision
muyunchengbi/OpenBioMed
muyunchengbi/torchkeras
PyTorch ❤️ Keras 😋😋
muyunchengbi/mmdetection
OpenMMLab Detection Toolbox and Benchmark
muyunchengbi/ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
muyunchengbi/annotated_deep_learning_paper_implementations
🧑🏫 60 implementations/tutorials of deep learning papers with side-by-side notes 📝, including transformers (original, XL, Switch, Feedback, ViT, ...), optimizers (Adam, AdaBelief, Sophia, ...), GANs (CycleGAN, StyleGAN2, ...), 🎮 reinforcement learning (PPO, DQN), CapsNet, distillation, ... 🧠
muyunchengbi/Python-100-Days
Python - From Novice to Master in 100 Days
muyunchengbi/eat_pytorch_in_20_days
PyTorch🍊🍉 is delicious, just eat it! 😋😋
muyunchengbi/gpt-2
Code for the paper "Language Models are Unsupervised Multitask Learners"
muyunchengbi/GLM
GLM (General Language Model)
muyunchengbi/eat_tensorflow2_in_30_days
TensorFlow 2.0 🍎🍊 is delicious, just eat it! 😋😋