Pinned Repositories
Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
ColossalAI
Making large AI models cheaper, faster and more accessible
Deep-Reinforcement-Learning
Code for the book Deep Reinforcement Learning: Principles and Practice
DeepLearning-In-Action
Source code for the book Deep Learning in Action
DeepLearningSystem
An introduction to the core principles of deep learning systems.
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
DeepSpeed-MII
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
FastChat
The release repo for "Vicuna: An Open-Source Chatbot Impressing GPT-4"
FasterTransformer
Transformer-related optimization, including BERT and GPT
A-ML-ER's Repositories
A-ML-ER/Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
A-ML-ER/ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
A-ML-ER/ColossalAI
Making large AI models cheaper, faster and more accessible
A-ML-ER/Deep-Reinforcement-Learning
Code for the book Deep Reinforcement Learning: Principles and Practice
A-ML-ER/DeepLearning-In-Action
Source code for the book Deep Learning in Action
A-ML-ER/DeepLearningSystem
An introduction to the core principles of deep learning systems.
A-ML-ER/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
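A central piece of DeepSpeed's memory optimization is ZeRO, which shards optimizer state across data-parallel ranks so each GPU holds only about 1/N of it. A minimal pure-Python sketch of the partitioning idea (the `partition` helper is illustrative only, not DeepSpeed's actual API, which also handles padding and collective communication):

```python
# Toy illustration of ZeRO-style state sharding: each of N data-parallel
# ranks owns a contiguous ~1/N slice of the flattened optimizer state.

def partition(flat_state, world_size):
    """Split a flat list of values into world_size near-equal shards."""
    base, rem = divmod(len(flat_state), world_size)
    shards, start = [], 0
    for rank in range(world_size):
        size = base + (1 if rank < rem else 0)  # early ranks absorb remainder
        shards.append(flat_state[start:start + size])
        start += size
    return shards

state = list(range(10))       # stand-in for flattened Adam moments
shards = partition(state, 4)  # 4 data-parallel ranks
# Each rank updates only its shard; an all-gather reassembles full parameters.
```

Because every rank stores one shard instead of a full replica, per-GPU optimizer memory drops roughly linearly with the number of ranks.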
A-ML-ER/DeepSpeed-MII
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
A-ML-ER/FastChat
The release repo for "Vicuna: An Open-Source Chatbot Impressing GPT-4"
A-ML-ER/FasterTransformer
Transformer-related optimization, including BERT and GPT
A-ML-ER/llama
Inference code for LLaMA models
A-ML-ER/TencentPretrain
Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
A-ML-ER/tensor_parallel
Automatically split your PyTorch models on multiple GPUs for training & inference
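The core trick behind splitting a model across GPUs is the column-parallel linear layer: the weight matrix is sharded column-wise, each device computes its slice of the output, and concatenating the slices reproduces the full result. A pure-Python sketch of that idea (plain lists stand in for tensors and shards stand in for devices; this is not the tensor_parallel API itself):

```python
# Column-parallel linear layer, the building block of tensor parallelism:
# W (in x out) is split column-wise into per-device shards; each device
# computes x @ W_shard, and concatenating the slices equals x @ W exactly.

def matmul(x, w):
    """Plain row-major matrix multiply for lists of lists."""
    return [[sum(xi[k] * w[k][j] for k in range(len(w)))
             for j in range(len(w[0]))] for xi in x]

def split_columns(w, n_devices):
    """Shard W column-wise into n_devices near-equal pieces."""
    base, rem = divmod(len(w[0]), n_devices)
    shards, start = [], 0
    for d in range(n_devices):
        size = base + (1 if d < rem else 0)
        shards.append([row[start:start + size] for row in w])
        start += size
    return shards

def parallel_linear(x, w, n_devices):
    """Compute x @ W as if each shard lived on its own GPU, then concat."""
    partials = [matmul(x, shard) for shard in split_columns(w, n_devices)]
    return [sum((p[i] for p in partials), []) for i in range(len(x))]

x = [[1.0, 2.0]]
w = [[1.0, 0.0, 2.0], [0.0, 1.0, 3.0]]          # 2x3 weight matrix
assert parallel_linear(x, w, 2) == matmul(x, w)  # matches single-device result
```

No cross-device communication is needed until the slices are concatenated, which is why column-wise sharding is the usual first split applied to large linear layers.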
A-ML-ER/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.