Pinned Repositories
LLaMA-Factory
Unify Efficient Fine-Tuning of 100+ LLMs
ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models with local CPU deployment (Chinese LLaMA & Alpaca LLMs)
Deep-NLP
"Knowledge gained from books is always shallow; true understanding comes only from practice."
deep-q-learning
Minimal Deep Q Learning (DQN & DDQN) implementations in Keras
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Megatron-LM
Ongoing research training transformer models at scale
ouc
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
matrixssy's Repositories
matrixssy/Megatron-LM
Ongoing research training transformer models at scale
matrixssy/ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
matrixssy/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models with local CPU deployment (Chinese LLaMA & Alpaca LLMs)
matrixssy/Deep-NLP
"Knowledge gained from books is always shallow; true understanding comes only from practice."
matrixssy/deep-q-learning
Minimal Deep Q Learning (DQN & DDQN) implementations in Keras
matrixssy/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
matrixssy/ouc