
MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning


This repo aims to provide the data, models, and an evaluation benchmark for multilingual instruction fine-tuning.

📚 Data

We translate Alpaca-GPT4, Evol-Instruct, and ShareGPT from English into the languages below using GPT-3.5 Turbo (a minimal sketch of the translation call follows the table), where

  • For Alpaca-GPT4, we directly translate the instructions and responses.
  • For Evol-Instruct, we translate the instructions and then generate the responses using the translated instructions.
  • For ShareGPT, we translate the English data from ShareGPT into other languages (note: due to the large scale of ShareGPT, we have not yet translated all of the data).
Language Alpaca-GPT4 Evol-Instruct ShareGPT
Chinese [huggingface] [huggingface] [huggingface]
Japanese [huggingface] [huggingface] [huggingface]
Korean [huggingface] [huggingface] [huggingface]
German [huggingface] [huggingface] [huggingface]
French [huggingface] [huggingface] [huggingface]
Italian [huggingface] [huggingface] [huggingface]
Arabic [huggingface] [huggingface] [huggingface]
Portuguese [huggingface] [huggingface] [huggingface]
Spanish [huggingface] [huggingface] [huggingface]
Hindi [huggingface] [huggingface] [huggingface]
Indonesian [huggingface] [huggingface] [huggingface]
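
The translation step itself is one chat-completion call per field. Below is a minimal sketch of how such a call could look with the openai Python client; the prompt wording, field names, and lack of batching or retries are assumptions for illustration, not the repo's actual pipeline.

# Hypothetical sketch: translate one Alpaca-GPT4 record with GPT-3.5 Turbo.
# Prompt wording and field names are assumptions, not the repo's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(text: str, target_language: str) -> str:
    """Translate a single field (instruction or response) into the target language."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": f"Translate the user's text into {target_language}. Return only the translation."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

record = {
    "instruction": "Give three tips for staying healthy.",
    "output": "1. Eat a balanced diet. 2. Exercise regularly. 3. Get enough sleep.",
}
translated = {key: translate(value, "Chinese") for key, value in record.items()}
print(translated)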

🤖 Models

CLI Interaction

python -m src.deploy.cli --model-path /path/to/weights/

For example, you can use FreedomIntelligence/phoenix-multiple-langs-v1, which is fine-tuned on eight languages (English, Chinese, French, Spanish, Portuguese, Arabic, Indonesian, and Hindi):

python -m src.deploy.cli --model-path FreedomIntelligence/phoenix-multiple-langs-v1
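
If you want to query the model programmatically instead of through the CLI, a plain transformers generation loop is enough, assuming the checkpoint is a standard causal language model on the Hugging Face Hub. Note that the CLI wraps inputs in the model's conversation template; the raw prompt below is only for illustration.

# Minimal sketch: query the model with transformers (assumes a causal-LM checkpoint).
# The CLI applies the model's conversation template; the raw prompt here is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/phoenix-multiple-langs-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explique brièvement ce qu'est le fine-tuning supervisé."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))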

Deployment

  1. Launch a controller
python -m src.deploy.webapp.controller
  2. Launch a model worker
python -m src.deploy.webapp.model_worker --model-path /path/to/weights/
  3. Launch a Gradio web server
python -m src.deploy.webapp.gradio_web_server

Now, you can open your browser and chat with a model.
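
Once all three processes are running, the web UI should list the registered model. If the stack keeps the FastChat-style defaults it appears to follow, the controller listens on localhost:21001 and exposes POST endpoints such as /list_models; the port, endpoints, and response shape in the sketch below are assumptions to verify against src/deploy/webapp, not a documented API.

# Sanity-check sketch: ask the controller which model workers have registered.
# Port 21001 and the /refresh_all_workers and /list_models endpoints are assumed
# FastChat-style defaults; verify them against src/deploy/webapp before relying on this.
import requests

controller_url = "http://localhost:21001"  # assumed default controller address
requests.post(f"{controller_url}/refresh_all_workers", timeout=10).raise_for_status()
models = requests.post(f"{controller_url}/list_models", timeout=10).json()["models"]
print("registered models:", models)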

Training

Specify the train_data_path and val_data_path and then run

bash scripts/train.sh
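
The per-language files can be mixed into a single file before pointing train_data_path at it. The sketch below assumes each file is a JSON list of examples; the file names are hypothetical placeholders, and the record schema must match whatever scripts/train.sh expects.

# Hypothetical sketch: merge several per-language JSON files into one shuffled training file.
# File names are placeholders; the record schema must match what scripts/train.sh expects.
import json
import random

language_files = [
    "data/alpaca_gpt4_zh.json",   # placeholder paths, not actual dataset files
    "data/alpaca_gpt4_fr.json",
    "data/evol_instruct_ar.json",
]

merged = []
for path in language_files:
    with open(path, encoding="utf-8") as f:
        merged.extend(json.load(f))

random.seed(42)         # reproducible shuffle
random.shuffle(merged)  # mix languages so every batch is multilingual

with open("data/train_multilingual.json", "w", encoding="utf-8") as f:
    json.dump(merged, f, ensure_ascii=False, indent=2)

print(f"Wrote {len(merged)} examples to data/train_multilingual.json")

Point train_data_path at the merged file (and build val_data_path the same way).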

💯 Evaluation Benchmark

Evaluation Data

  • We translate MMLU and Vicuna-80 into the languages above for evaluation.
Language MMLU
Chinese [huggingface]
Japanese [huggingface]
Korean [huggingface]
German [huggingface]
French [huggingface]
Italian [huggingface]
Arabic [huggingface]
Portuguese [huggingface]
Spanish [huggingface]
Hindi [huggingface]
Indonesian [huggingface]

Evaluation

  • For MMLU (a generic scoring sketch follows this list)
bash scripts/eval_mmlu.sh ${LANGUAGE} ${MODEL_PATH} ${MODEL_ID}
  • For Vicuna-80
bash scripts/eval_vicuna-80.sh ${LANGUAGE} ${MODEL_PATH} ${MODEL_ID}
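
MMLU is multiple choice, so scoring comes down to comparing the model's predicted option letter with the gold letter for each question and averaging per subject. The sketch below shows that standard scheme on toy in-memory records; the repo's eval_mmlu.sh may build prompts and parse answers differently.

# Generic sketch of MMLU-style scoring: compare predicted option letters with gold letters.
# Toy in-memory records; the repo's eval script may format prompts and parse output differently.
from collections import defaultdict

results = [  # (subject, gold letter, predicted letter) -- illustrative only
    ("abstract_algebra", "B", "B"),
    ("abstract_algebra", "D", "A"),
    ("world_history", "C", "C"),
]

per_subject = defaultdict(lambda: [0, 0])  # subject -> [correct, total]
for subject, gold, pred in results:
    per_subject[subject][0] += int(pred.strip().upper() == gold)
    per_subject[subject][1] += 1

for subject, (correct, total) in sorted(per_subject.items()):
    print(f"{subject}: {correct / total:.2%} ({correct}/{total})")

overall_correct = sum(c for c, _ in per_subject.values())
overall_total = sum(t for _, t in per_subject.values())
print(f"average accuracy: {overall_correct / overall_total:.2%}")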

Citation

If you find this repository helpful, please cite it as follows.

@software{Chen_MultilingualSIFT_Multilingual_Supervised_2023,
author = {Chen, Zhihong and Yan, Shuo and Liang, Juhao and Jiang, Feng and Wu, Xiangbo and Yu, Fei and Chen, Guiming Hardy and Chen, Junying and Zhang, Hongbo and Li, Jianquan and Wan, Xiang and Wang, Benyou},
month = jul,
title = {{MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning}},
url = {https://github.com/FreedomIntelligence/MultilingualSIFT.git},
version = {0.1},
year = {2023}
}