meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A, plus a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama for WhatsApp & Messenger.
Jupyter Notebook · License: NOASSERTION
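As a rough illustration of the PEFT workflow these recipes build on, the sketch below attaches a LoRA adapter to a Llama checkpoint with Hugging Face transformers and peft. The model name and LoRA hyperparameters are placeholder assumptions, not values taken from the recipes.

```python
# Minimal sketch (assumed values, not taken from the recipes): attach a LoRA
# adapter to a Llama checkpoint using Hugging Face transformers + peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices train
```

Only the adapter weights are updated during training, which is what keeps single-GPU fine-tuning of the smaller Llama checkpoints feasible.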
Issues
Bug in recipes/quickstart/inference/local_inference/multi_modal_infer.py Results in "end_header_id|>" Preceding Generated Text
#826 opened by eii-lyl - 0
A custom function name for "get_custom_dataset" also changes the name of the function called to retrieve a custom data loader (see the custom-dataset loader sketch after the issue list)
#828 opened by yaoshiang - 0
How can I construct dataset for llama vision model
#821 opened by blurmemo - 1
Protect our copyrighted Intellectual Property
#819 opened by anderaiprosicks - 0
continue pre-training Example
#820 opened by keyuchen21 - 1
Can the meta-llama/Llama-3.2-1B-Instruct model be used for visual fine-tuning?
#818 opened by Sunstroperao - 1
The parameter in llama-recipes/src/llama_recipes/configs/peft.py does not seem to be used during fine-tuning.
#814 opened by xuqianmamba - 0
Batch Inference with Llama 3.2 Generate Function: Only the First Result is Correct
#816 opened by smile-struggler - 0
Missing `input_ids` Error When Going through Llama 3.2 Vision Models Fine-Tuning Recipe
#812 opened by clankur - 1
Expedited Access Request Llama2 (HF)
#765 opened by farris - 1
Regulation of the Minister of Communication and Informatics Number 28 of 2013 on Procedures and Requirements for Licensing the Operation of Digital Terrestrial Television Broadcasting Services
#811 opened by Marine378 - 3
Adding a Vision RAG Notebook to Llama Recipes
#781 opened by adithya-s-k - 6
GPU memory allocation increases during fine-tuning
#792 opened by rong-hash - 3
TemplateError: Prompting with images is incompatible with system messages.
#774 opened by hessaAlawwad - 7
Is it possible to load a local dataset to use it as the custom dataset for finetuning?
#802 opened by Bleking - 3
Add eval code for LLaMA 3.2 text model
#732 opened by LeoXinhaoLee - 9
Custom dataset
#784 opened by Amerehei - 2
Llama 3.2 Vision Models Fine-Tuning Recipe
#770 opened by JimChienTW - 3
[Solved] [Llama-11B-Vision] [Lora Finetune] [PEFT] `IndexError: list index out of range` when saving checkpoints
#780 opened by yifan-gao-dev - 2
Fine-tuning on a custom dataset using the notebook
#788 opened by amoghskanda - 2
Long context
#785 opened by Amerehei - 2
Mardep UKHO
#779 opened by Marine378 - 1
J
#776 opened by novocopilo - 0
The notebook titled "Fine-tuning with Multi GPU" does not seem to generate the dataset correctly. Possible get_dataloader bug.
#777 opened by saocorley - 5
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! in chatcompletion.py with the Llama 3.2 Instruct model
#771 opened by Emersonksc - 4
Adding new vocab doesn't save the model
#773 opened by andymvp2018 - 1
Learning rate scheduler
#738 opened by NicoZenith - 2
Not able to save trained model
#752 opened by grvsh02 - 1
Two forward recomputations occur in a single backward pass when using FSDP with activation checkpointing
#740 opened by mingyuanw-mt - 1
Project license?
#749 opened by notpushkin - 2
Project summary should say what it does
#747 opened by yavin5 - 5
Issue with quickstart_peft_finetuning.ipynb
#730 opened by jihao2021 - 2
Checkpoint feature via steps instead of epoch
#724 opened by mylesgoose - 1
Feature request: please support InternLM2.5
#721 opened by boshallen
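Several of the issues above (#828, #802, #784, #788) revolve around plugging a custom or local dataset into fine-tuning. The sketch below shows the general shape of a custom loader module, assuming the recipes look up a user-supplied get_custom_dataset(dataset_config, tokenizer, split) function; the file name, field names, and tokenization details are illustrative assumptions rather than the repository's documented contract.

```python
# custom_dataset.py -- hypothetical module (assumed interface: the recipes call
# get_custom_dataset(dataset_config, tokenizer, split) from a user-supplied file).
from datasets import load_dataset

def get_custom_dataset(dataset_config, tokenizer, split):
    # Load a local JSONL file with "prompt" / "answer" fields (assumed schema);
    # the split argument is ignored in this minimal sketch.
    dataset = load_dataset("json", data_files="my_data.jsonl", split="train")

    def tokenize(example):
        # Concatenate prompt and answer, and train on the full sequence.
        text = example["prompt"] + example["answer"] + tokenizer.eos_token
        ids = tokenizer(text, truncation=True, max_length=2048)
        ids["labels"] = ids["input_ids"].copy()
        return ids

    return dataset.map(tokenize, remove_columns=dataset.column_names)
```

A loader like this returns the tokenized columns (input_ids, attention_mask, labels) that a causal-LM training loop expects; masking the prompt tokens out of the labels is a common refinement left out here for brevity.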