Issues
KeyError: 'base_model.model.model.model.layers.14.mlp.down_proj' when merging and exporting a QLoRA (rank 4) trained model on CUDA
#2213 opened by xiaoheiyue - 3
Bug: BOFT forward/merging with CUDA
#2219 opened by BenjaminBossan - 6
KeyError: Parameter containing
#2205 opened by Amerehei - 1
KeyError: 'messages'
#2204 opened by rickeyhhh - 5
Prompt Tuning Crash with Llama-3.2 in torch.embedding
#2161 opened by hrsmanian - 7
Update `layers_pattern` logic to accept a layer index
#2165 opened by fcakyon - 4
support for heterogeneous types for `modules_to_save`
#2136 opened by saeid93 - 5
about run_unsloth_peft.sh
#2152 opened by opentld - 14
Documentation for `LoraConfig`
#2212 opened by brynhayder - 25
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'exclude_modules'
#2208 opened by imrankh46 - 2
`PeftModelForCausalLM.generate` ignores prompt tuning parameters unless `use_cache=False`
#2123 opened by mattlgarber - 4
RuntimeError: element 0 of tensors.. OpenCLIP model
#2200 opened by EngEmmanuel - 1
Add Assertions for `task_type` in `LoraConfig`
#2203 opened by d-kleine - 14
Could not fine-tune Gemma 2 9B with LoRA and FSDP
#2111 opened by imadoualid - 7
Memory Inefficiency for LoRA & DoRA during fine-tuning.
#2196 opened by gslama12 - 2
[BUG] Issue with using `rank_pattern` and `alpha_pattern` together in `LoraConfig`
#2194 opened by sirluk - 9
"peft_prefix_tuning_seq2seq.ipynb" RuntimeError Due to Tensor Dimension Mismatch
#2192 opened by 1hb6s7t - 7
merge_and_unload docs do not clarify behaviour for quantized base models
#2105 opened by RonanKMcGovern - 4
Ineffective Fine-Tuning Bug: Using `get_peft_model()` Before Loading LoRA Produces Outputs Identical to the Base Model
#2115 opened by Hoper-J - 1
How to change the `modules_to_save` setting when reloading a LoRA fine-tuned model
#2188 opened by dengchengxifrank - 2
PiSSA Updates Base Model Weights Silently
#2184 opened by njbrake - 11
Integration of merge-kit into PEFT
#2179 opened by ParagEkbote - 4
Evaluation of peft models using lm-eval-harness
#2182 opened by JINO-ROHIT - 2
Tensor Expansion Size Mismatch During Forward Pass
#2154 opened by VecherVhatuX - 0
X-LoRA cannot reload model from the last checkpoint using trainer.train(resume_from_checkpoint="checkpp")
#2185 opened by SongHanKen - 4
LoRA PiSSA init: gpt2 not supported
#2103 opened by suyang160 - 0
How can I export a model in GGUF format?
#2181 opened by xu756 - 12
fsdp_auto_wrap_policy is not working when FSDP_TRANSFORMER_CLS_TO_WRAP and the model's _no_split_modules are None.
#2166 opened by eljandoubi - 6
Request to Include Named Entity Recognition and Relation Extraction Model Fine-Tuning Examples and Guidance
#2119 opened by HarikrishnanK9 - 5
Is it possible to support the new Bone method?
#2138 opened by JL-er - 2
Optimize DoRA computation when there is no dropout
#2107 opened by BenjaminBossan - 3
LoKrModel doesn't support LLM Model
#2147 opened by WhuanY - 8
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
#2129 opened by brunopistone - 2
SFTTrainer: element 0 of tensors does not require grad and does not have a grad_fn
#2125 opened by brunopistone - 5
Provided p-tuning, prefix tuning, and prompt tuning code does not work during inference
#2131 opened by chandar-l - 4
ValueError: Please specify `target_modules` in `peft_config`; issue exists with Gemma
#2128 opened by yananchen1989 - 1
PEFT should update how `past_key_values` is passed for prefix tuning, prompt tuning, etc.
#2121 opened by Kami-chanw - 4
PEFT Config checking update request
#2112 opened by lemingshen - 1
Update `huggingface_hub` requirement version
#2116 opened by fmartiescofet
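Several of the issues above (e.g. #2208) come down to passing a `LoraConfig` argument that the installed PEFT version does not yet accept. A minimal sketch of a version guard, using only the standard library; the `0.14.0` threshold for `exclude_modules` is an assumption here, not verified against the PEFT changelog:

```python
from importlib.metadata import version, PackageNotFoundError


def supports_exclude_modules(min_version=(0, 14, 0)):
    """Return True if the installed peft package is at least `min_version`,
    i.e. presumably new enough to accept LoraConfig(exclude_modules=...).
    Returns False if peft is absent or its version cannot be parsed."""
    try:
        # Take at most the first three numeric components, e.g. "0.13.2".
        installed = tuple(int(p) for p in version("peft").split(".")[:3])
    except (PackageNotFoundError, ValueError):
        return False
    return installed >= min_version
```

Guarding like this lets calling code fall back to a config without the newer keyword instead of failing with a TypeError at `LoraConfig.__init__()`.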