
PEFT Fine-Tuning Project 🚀

Welcome to the PEFT (Parameter-Efficient Fine-Tuning) project repository! This project focuses on efficiently fine-tuning large language models using LoRA and Hugging Face's transformers library.
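LoRA, which most of the notebooks below build on, freezes the pretrained weights and trains only a low-rank update. A minimal NumPy sketch of the core idea (shapes and hyperparameters here are illustrative, not taken from any particular notebook):

```python
import numpy as np

# LoRA idea: instead of updating the full weight matrix W (d_out x d_in),
# train a low-rank update B @ A with rank r << min(d_out, d_in).
# Effective weight: W' = W + (alpha / r) * B @ A

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init -> no change at start

def lora_forward(x):
    # frozen base path + scaled low-rank adapter path
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted model matches the base model exactly.
assert np.allclose(lora_forward(x), W @ x)

# Parameter savings: a full update trains d_out * d_in values,
# LoRA trains only r * (d_in + d_out).
print(d_out * d_in, r * (d_in + d_out))  # 128 vs 48
```

This is why LoRA fine-tuning fits on a single Colab GPU: only the small `A` and `B` matrices receive gradients, while `W` stays frozen (and, in the QLoRA notebooks, quantized).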

Fine-Tuning Notebook Table 📑

| Notebook Title | Description | Colab Badge |
| --- | --- | --- |
| 1. Efficiently Train Large Language Models with LoRA and Hugging Face | Details and code for efficient training of large language models using LoRA and Hugging Face. | Open in Colab |
| 2. Fine-Tune Your Own Llama 2 Model in a Colab Notebook | Guide to fine-tuning your Llama 2 model using Colab. | Open in Colab |
| 3. Guanaco Chatbot Demo with LLaMA-7B Model | Showcase of a chatbot demo powered by the LLaMA-7B model. | Open in Colab |
| 4. PEFT Finetune-Bloom-560m-tagger | Project details for PEFT Finetune-Bloom-560m-tagger. | Open in Colab |
| 5. Finetune_Meta_OPT-6-1b_Model_bnb_peft | Details and guide for finetuning the Meta OPT-6-1b model using bitsandbytes and PEFT. | Open in Colab |
| 6. Finetune Falcon-7b with BNB Self-Supervised Training | Guide for finetuning Falcon-7b using BNB self-supervised training. | Open in Colab |
| 7. FineTune LLaMa2 with QLoRA | Guide to fine-tuning the Llama 2 7B pre-trained model using the PEFT library and the QLoRA method. | Open in Colab |
| 8. Stable_Vicuna13B_8bit_in_Colab | Guide to fine-tuning Vicuna-13B in 8-bit. | Open in Colab |
| 9. GPT-Neo-X-20B-bnb2bit_training | Guide on how to train the GPT-NeoX-20B model using bfloat16 precision. | Open in Colab |
| 10. MPT-Instruct-30B Model Training | MPT-Instruct-30B is a large language model from MosaicML trained on a dataset of short-form instructions. It can be used to follow instructions, answer questions, and generate text. | Open in Colab |
| 11. RLHF_Training_for_CustomDataset_for_AnyModel | How to train any LLM with RLHF on a custom dataset. | Open in Colab |
| 12. Fine_tuning_Microsoft_Phi_1_5b_on_custom_dataset(dialogstudio) | How to train Microsoft Phi 1.5 with TRL SFT training on a custom dataset (dialogstudio). | Open in Colab |
| 13. Finetuning OpenAI GPT3.5 Turbo | How to finetune GPT-3.5 Turbo on your own data. | Open in Colab |
| 14. Finetuning Mistral-7b using AutoTrain-Advanced | How to finetune Mistral-7b using AutoTrain-Advanced. | Open in Colab |
| 15. RAG LangChain Tutorial | How to use RAG with LangChain. | Open in Colab |
| 16. Knowledge Graph LLM with LangChain PDF Question Answering | How to build a knowledge graph with PDF question answering. | Open in Colab |
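Several of the notebooks above (the QLoRA, bitsandbytes, and 8-bit entries in particular) follow the same loading pattern: quantize the frozen base model, then attach a LoRA adapter. A minimal configuration sketch using the transformers and peft libraries — the model name, target modules, and hyperparameters are illustrative assumptions, not values taken from the notebooks:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA-style 4-bit NF4 quantization for the frozen base weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative; any causal LM works
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# LoRA adapter: only these low-rank matrices are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

The wrapped model can then be passed to a standard `Trainer` or TRL's `SFTTrainer`, as the individual notebooks demonstrate.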

Contributing 🤝

Contributions are welcome! If you'd like to contribute to this project, feel free to open an issue or submit a pull request.

License 📝

This project is licensed under the MIT License.


Created with ❤️ by Ashish