Pinned Repositories
Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
AutoAWQ
AutoAWQ implements the AWQ algorithm for 4-bit quantization, with a 2x speedup during inference.
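A minimal sketch of the quantization workflow (the model path and quant_config values below are illustrative assumptions, not project defaults):

    from awq import AutoAWQForCausalLM
    from transformers import AutoTokenizer

    model_path = "mistralai/Mistral-7B-v0.1"  # hypothetical source checkpoint
    quant_path = "mistral-7b-awq"             # where the quantized weights go

    model = AutoAWQForCausalLM.from_pretrained(model_path)
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    # Typical AWQ settings: 4-bit weights, quantization group size 128
    quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
    model.quantize(tokenizer, quant_config=quant_config)

    model.save_quantized(quant_path)
    tokenizer.save_pretrained(quant_path)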
fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
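For a quick taste, fairseq publishes pretrained translation models on torch.hub; a minimal sketch, assuming the WMT'19 English-German single model from fairseq's examples (weights download on first use):

    import torch

    # Load a pretrained English->German transformer via torch.hub
    en2de = torch.hub.load(
        "pytorch/fairseq",
        "transformer.wmt19.en-de.single_model",
        tokenizer="moses",
        bpe="fastbpe",
    )
    en2de.eval()
    print(en2de.translate("Machine learning is fun!", beam=5))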
LLaMA-Factory
Unified, efficient fine-tuning of 100+ LLMs.
Megatron-DeepSpeed
Ongoing research on training transformer language models at scale, including BERT and GPT-2.
modelzoo
nmt
TensorFlow Neural Machine Translation Tutorial
pytorch-lightning
The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
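A minimal sketch of the idea (toy data and layer sizes are arbitrary): the LightningModule holds the model, loss, and optimizer, and the Trainer supplies the training loop.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    import pytorch_lightning as pl

    class LitRegressor(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.net = nn.Linear(10, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = nn.functional.mse_loss(self.net(x), y)
            self.log("train_loss", loss)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    data = TensorDataset(torch.randn(256, 10), torch.randn(256, 1))
    trainer = pl.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
    trainer.fit(LitRegressor(), DataLoader(data, batch_size=32))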
Skywork
Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sourced the model weights, training data, evaluation data, and evaluation methods.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
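A minimal offline-inference sketch (the model name is an illustrative small checkpoint; any Hugging Face-format causal LM works):

    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")  # small model for a quick smoke test
    params = SamplingParams(temperature=0.8, max_tokens=64)

    outputs = llm.generate(["The capital of France is"], params)
    for out in outputs:
        print(out.outputs[0].text)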
chuanmingliu's Repositories
chuanmingliu/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
chuanmingliu/Auto-GPT
An experimental open-source attempt to make GPT-4 fully autonomous.
chuanmingliu/AutoAWQ
AutoAWQ implements the AWQ algorithm for 4-bit quantization, with a 2x speedup during inference.
chuanmingliu/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
chuanmingliu/LLaMA-Factory
Unified, efficient fine-tuning of 100+ LLMs.
chuanmingliu/Megatron-DeepSpeed
Ongoing research on training transformer language models at scale, including BERT and GPT-2.
chuanmingliu/modelzoo
chuanmingliu/nmt
TensorFlow Neural Machine Translation Tutorial
chuanmingliu/pytorch-lightning
The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
chuanmingliu/Skywork
Skywork series models are pre-trained on 3.2TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sourced the model weights, training data, evaluation data, and evaluation methods.
chuanmingliu/template-node-express
chuanmingliu/test
This is a test repo.
chuanmingliu/transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
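A minimal sketch using the high-level pipeline API (the task name pulls a default checkpoint on first use):

    from transformers import pipeline

    # Downloads a default sentiment model the first time it runs
    classifier = pipeline("sentiment-analysis")
    print(classifier("This library makes NLP almost too easy."))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]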