# LLM-FineTuning-Large-Language-Models

Multiple LLM (Large Language Model) fine-tuning projects, primarily as Jupyter notebooks.

For almost all of these projects there is a detailed video walkthrough on my YouTube channel:

YouTube Link

## Fine-tuning LLM (and YouTube Video Explanations)

| Notebook | YouTube Video |
| --- | --- |
| CodeLLaMA-34B - Conversational Agent | YouTube Link |
| Inference Yarn-Llama-2-13b-128k with KV Cache to answer quiz on very long textbook | YouTube Link |
| Mistral 7B fine-tuning with PEFT and QLoRA | YouTube Link |
| Falcon fine-tuning on openassistant-guanaco | YouTube Link |
| Fine-tuning Phi-1.5 with PEFT and QLoRA | YouTube Link |
| Web scraping with Large Language Models (LLM) - AnthropicAI + LangChainAI | YouTube Link |
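The Yarn-Llama-2 notebook above relies on a KV cache for long-context inference. As a rough pure-Python sketch of the idea (illustrative only, not the notebook's actual code): each decoding step appends its key/value vectors to the cache once, then attends over the cached history, instead of recomputing every past key and value at every step.

```python
import math

def attend(q, keys, values):
    """Single-head scaled dot-product attention of one query vector
    over lists of key/value vectors."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]      # softmax over cached positions
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(len(values[0]))]

class KVCache:
    """Append-only cache: step t stores its key/value once and attends
    over t entries, rather than recomputing all past projections."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        self.keys.append(k)
        self.values.append(v)
        return attend(q, self.keys, self.values)
```

With a single cached entry the softmax weight is 1, so the output equals that entry's value vector; later steps match a full recomputation over the whole history.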

## Fine-tuning LLM

| Notebook | Colab |
| --- | --- |
| 📌 Finetune CodeLLaMA-34B with QLoRA | Open In Colab |
| 📌 Mixtral Chatbot with Gradio | |
| 📌 TogetherAI API to run Mixtral | Open In Colab |
| 📌 Integrating TogetherAI with LangChain 🦙 | Open In Colab |
| 📌 Mistral-7B-Instruct GPTQ - Finetune on finance-alpaca dataset 🦙 | Open In Colab |
| 📌 Mistral 7B fine-tuning with DPO (Direct Preference Optimization) | Open In Colab |
| 📌 Finetune Llama-2 GPTQ | |
| 📌 TinyLlama with Unsloth and RoPE Scaling on the dolly-15 dataset | Open In Colab |
| 📌 TinyLlama fine-tuning with Taylor Swift song lyrics | Open In Colab |
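Most of the notebooks above fine-tune with LoRA/QLoRA via PEFT. The core low-rank idea can be sketched in plain Python (an illustrative sketch, not the notebooks' code): the frozen weight matrix W is augmented with a trainable low-rank product B·A, scaled by alpha/r, so only r·(d_in + d_out) parameters are trained instead of d_in·d_out.

```python
def matmul(M, N):
    """Plain list-of-lists matrix multiply."""
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def lora_forward(W, A, B, x, alpha, r):
    """y = W x + (alpha / r) * B A x for a column vector x.
    W is frozen (d_out x d_in); A (r x d_in) and B (d_out x r) are the
    trainable low-rank adapter. B starts at zero, so at init y = W x."""
    xs = [[v] for v in x]                     # x as a column vector
    base = matmul(W, xs)                      # frozen path
    delta = matmul(B, matmul(A, xs))          # low-rank adapter path
    scale = alpha / r
    return [base[i][0] + scale * delta[i][0] for i in range(len(W))]
```

Because B is initialized to zeros in LoRA, the adapted layer starts out exactly equal to the frozen layer, which is why fine-tuning begins from the base model's behavior.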

## LLM Techniques and Utils - Explained

**LLM Concepts**

- 📌 DPO (Direct Preference Optimization) training and its datasets
- 📌 4-bit LLM quantization with GPTQ
- 📌 Quantize with HF Transformers
- 📌 Understanding rank r in LoRA and the related matrix math
- 📌 Rotary Embeddings (RoPE), one of the fundamental building blocks of the LLaMA-2 implementation
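To give a taste of the RoPE item above, here is a minimal pure-Python sketch (illustrative only): each pair of dimensions in a query or key vector is rotated by an angle proportional to the token position, which makes the dot product of a rotated query and key depend only on their relative distance.

```python
import math

def rope(x, pos, base=10000.0):
    """Apply rotary position embedding to vector x at position pos.
    x has even length d; each pair (x[2i], x[2i+1]) is rotated by
    pos * theta_i, where theta_i = base^(-2i/d) as in the LLaMA family."""
    d = len(x)
    out = []
    for i in range(0, d, 2):
        theta = base ** (-i / d)          # per-pair rotation frequency
        angle = pos * theta
        c, s = math.cos(angle), math.sin(angle)
        out += [x[i] * c - x[i + 1] * s,
                x[i] * s + x[i + 1] * c]
    return out
```

The key property: shifting both the query position m and the key position n by the same offset leaves their dot product unchanged, so attention sees only the relative distance m - n.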

## Other Smaller Language Models