# Fine-Tune Alpaca For Any Language

In this repository, I've collected all the sources I used to create the YouTube video and the Medium article on fine-tuning the Alpaca model for any language. You can find more details on how to do this in both the video and the article.

Note: This repository is intended as supplementary material for the video. This means you can't just clone it, run three commands, and be done with fine-tuning. The main reason is that the Alpaca-LoRA repository is constantly being improved and changed, which would make it difficult to keep its files (which I have partially customized) up to date.

## Translation

Run each cell in the translation notebook to translate the cleaned dataset into your target language. To do this, make sure you configure your target language and set up your auth_key for the DeepL API or OpenAI API.
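The core of the notebook is a loop that translates the text fields of each task. Here is a minimal sketch of that idea, where `translate_fn` is a stand-in for the actual DeepL or OpenAI API call (the field names follow the standard Alpaca dataset format):

```python
# Fields of an Alpaca task that contain natural language to translate.
TRANSLATABLE_FIELDS = ("instruction", "input", "output")

def translate_task(task, translate_fn):
    """Return a copy of one Alpaca task with its text fields translated.

    translate_fn: any callable mapping a source-language string to the
    target language, e.g. a thin wrapper around the DeepL or OpenAI API.
    """
    translated = dict(task)
    for field in TRANSLATABLE_FIELDS:
        text = task.get(field, "")
        # Skip empty fields (many tasks have no "input") to save API quota.
        translated[field] = translate_fn(text) if text else text
    return translated

def translate_dataset(tasks, translate_fn):
    """Translate every task in the dataset, leaving the originals untouched."""
    return [translate_task(task, translate_fn) for task in tasks]
```

For a real run, `translate_fn` could be something like `lambda s: deepl.Translator(auth_key).translate_text(s, target_lang="DE").text` using the official DeepL Python client.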

In this file you can see all the tasks I translated, and in this file you can see all the tasks from the original dataset that I did not translate.

And these are the translated datasets I used to fine-tune the Alpaca model:

Thanks to @JSmithOner for translating the whole dataset (52k tasks) into German using Google Translate:
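Before fine-tuning on a translated file, it is worth sanity-checking that it still follows the Alpaca JSON layout (a list of objects with `instruction`, `input`, and `output` keys, where `input` may be an empty string). A small helper for that check might look like this; the function name is my own, not part of the repository:

```python
import json

REQUIRED_FIELDS = {"instruction", "input", "output"}

def check_alpaca_file(path):
    """Load a translated dataset and verify it matches the Alpaca layout.

    Returns the number of tasks, or raises ValueError on the first problem.
    """
    with open(path, encoding="utf-8") as f:
        tasks = json.load(f)
    if not isinstance(tasks, list):
        raise ValueError("expected a JSON list of tasks")
    for i, task in enumerate(tasks):
        missing = REQUIRED_FIELDS - task.keys()
        if missing:
            raise ValueError(f"task {i} is missing fields: {missing}")
    return len(tasks)
```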

## Fine-Tuning

```shell
python finetune.py --base_model="decapoda-research/llama-7b-hf" --data_path "translated_task_de_deepl_12k.json"
```
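During fine-tuning, each task is wrapped in the Alpaca prompt template before tokenization. A sketch of that template, following the Stanford Alpaca format (tasks with a non-empty `input` use the longer variant):

```python
def generate_prompt(task):
    """Build an Alpaca-style training prompt for one task dict."""
    if task.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{task['instruction']}\n\n"
            f"### Input:\n{task['input']}\n\n"
            f"### Response:\n{task['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{task['instruction']}\n\n"
        f"### Response:\n{task['output']}"
    )
```

Since the template text stays in English while the task fields are translated, the model learns to respond in the target language from the translated `instruction`/`output` pairs.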

## Evaluation

```shell
python generate_eval.py
```
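Conceptually, the evaluation script loops over a set of test instructions, queries the fine-tuned model, and records the answers. A minimal sketch of that loop, with `generate_fn` standing in for the actual model call (e.g. a wrapper around `model.generate()` for the LoRA model):

```python
import json

def run_eval(instructions, generate_fn, out_path=None):
    """Query the model on each instruction and collect the results.

    generate_fn: callable mapping a prompt string to the model's answer.
    out_path: optional JSON file to persist results for later comparison.
    """
    results = []
    for instruction in instructions:
        results.append({
            "instruction": instruction,
            "response": generate_fn(instruction),
        })
    if out_path:
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(results, f, ensure_ascii=False, indent=2)
    return results
```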

You can see my evaluation results in this file or in my Medium article.

## Trained Models (Hugging Face)