OpenLLaMA-Chinese is a 100% free Chinese large language model that can be used for both non-commercial and commercial purposes.
OpenLLaMA-Chinese is built on OpenLLaMA, which is a permissively licensed open-source reproduction of Meta AI's LLaMA 7B and 13B models, trained on the RedPajama dataset. OpenLLaMA also includes a smaller 3B variant of the LLaMA model. We have conducted fine-tuning on Chinese and English instructions using the OpenLLaMA base models and have made our weights publicly available.
- OpenLLaMA 3B
- OpenLLaMA 7B
- OpenLLaMA 13B (coming soon!)
- OpenLLaMA 3B (coming soon!)
- OpenLLaMA 7B (coming soon!)
- OpenLLaMA 13B (coming soon!)
- OpenLLaMA 3B
- OpenLLaMA 7B
- OpenLLaMA 13B (coming soon!)
For Chinese fine-tuning, we used alpaca_data_zh_51k.json from the Chinese-LLaMA-Alpaca project.
For English fine-tuning, we used alpaca_data.json from the Stanford Alpaca project.
For fine-tuning with both English and Chinese instructions, we used data from both sources.
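Both files follow the standard Alpaca instruction format: a JSON list of records with `instruction`, `input`, and `output` fields. The sketch below is only an illustration of how the bilingual set can be assembled by concatenating the two lists; the file paths are placeholders and should be adjusted to wherever you downloaded the data.

```python
import json

# Load the Chinese and English Alpaca-format instruction data.
# Paths are assumptions; point them at your local copies of the files.
with open("data/alpaca_data_zh_51k.json", encoding="utf-8") as f:
    zh_records = json.load(f)
with open("data/alpaca_data.json", encoding="utf-8") as f:
    en_records = json.load(f)

# For Chinese+English fine-tuning, simply concatenate the two instruction sets.
bilingual_records = zh_records + en_records

print(len(zh_records), len(en_records), len(bilingual_records))
print(bilingual_records[0]["instruction"])
```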
We provide inference code based on both Jittor and PyTorch. The Jittor inference code is coming soon.
The PyTorch generation code is adapted from LLaMA-X.
To use the PyTorch inference code, follow these steps:
- Download the weights and update the base_model path in inference/gen_torch.py.
- Run the following command:
```bash
python inference/gen_torch.py
```
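For reference, the following is a rough sketch of what such PyTorch inference looks like using the Hugging Face `transformers` LLaMA classes. It is not the repository's `gen_torch.py`; `base_model` and the prompt are placeholders, and the weight path should point to the files you downloaded above.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Placeholder: set this to the local directory containing the downloaded weights.
base_model = "path/to/openllama-chinese-weights"

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory use
    device_map="auto",
)
model.eval()

prompt = "请介绍一下北京的名胜古迹。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```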
FittenTech offers LLM pretraining and fine-tuning services. For more details, please visit https://llm.fittentech.com/.
We would like to express our gratitude to the developers of the following open-source projects, as our project builds upon their work:
Following OpenLLaMA, we adopt the Apache 2.0 License.