Issues
Prefix Tuning dimension error with Qwen2 and missing vocab_size for PaliGemma2
#2315 opened by Florian-Dreyer - 2
The provided `peft_type` 'PROMPT_TUNING' is not compatible with the `PeftMixedModel`.
#2307 opened by Radu1999 - 3
Documentation for LoraConfig
#2212 opened by brynhayder - 18
Request to add a LoRA implementation for torch.nn.Conv1d rather than transformers.pytorch_utils.Conv1D
#2241 opened by HelloWorldLTY - 4
Adapter name conflict with tuner prefix leads to unclear warning during model loading
#2252 opened by pzdkn - 5
Guidance Needed on Two-Stage Fine-Tuning with LoRA (SFT and DPO) for Model Adaptation
#2264 opened by none0663 - 1
How to pass in an attention_mask that has one more dimension than input_ids
#2301 opened by Chinesehou97 - 2
DeepSeek LoRA: custom keys in input_data fail
#2259 opened by Zhaoyi-Yan - 3
Is it possible to support Transformer Engine when using LoRA in Megatron?
#2260 opened by liulong11 - 3
load_adapter error: "Target module is not supported" when using Qwen2-VL
#2296 opened by bigmouthbabyguo-530 - 27
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'exclude_modules'
#2208 opened by imrankh46 - 2
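The `exclude_modules` error above typically means the installed peft release predates that option. A minimal sketch of a version-tolerant guard, assuming the standard `LoraConfig` dataclass (module names are illustrative):

```python
from peft import LoraConfig

kwargs = dict(target_modules=["q_proj", "v_proj"])
# `exclude_modules` only exists in newer peft releases; pass it conditionally.
if "exclude_modules" in LoraConfig.__dataclass_fields__:
    kwargs["exclude_modules"] = ["lm_head"]  # illustrative exclusion
config = LoraConfig(**kwargs)
```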
QDoRA support
#2298 opened by imrankh46 - 16
Frozen modules also get LoRA applied
#2250 opened by onehaitao - 8
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'eva_config'
#2275 opened by Mohankrish08 - 3
Is it possible to add LoRA to a specific head?
#2293 opened by SpeeeedLee - 1
ValueError: Target module Dropout(p=0.05, inplace=False) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`.
#2286 opened by gyuilLim - 8
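The `ValueError` above usually means the `target_modules` pattern matched a `torch.nn.Dropout` submodule. A minimal sketch of restricting the match to supported layer types (module names are illustrative):

```python
from peft import LoraConfig

# A broad regex such as r".*proj|.*drop.*" can accidentally match Dropout
# submodules; listing the Linear layers by name avoids the ValueError.
config = LoraConfig(target_modules=["q_proj", "k_proj", "v_proj"])
```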
TypeError: TorchaoLoraLinear.__init__() missing 1 required keyword-only argument: 'get_apply_tensor_subclass'
#2285 opened by spezialspezial - 8
Question about using LoRA for Mamba2
#2274 opened by Doctor-James - 3
Make module imports / re-export conforming with typing specs for proper type checker support
#2261 opened by bluenote10 - 3
Request to integrate Monarch-based PEFT (MoRe)
#2277 opened by Edenzzzz - 1
Adding Dynamic Low-Rank Adaptation (DoRA, ACL 2024)
#2278 opened by dohuyduc2002 - 1
Problem with loading PEFT model from checkpoint
#2256 opened by sno-ko - 1
Support for Custom Adapters
#2273 opened by dgme-syz - 11
Different Results When Predicting with Multiple LoRA Adapters in a Loop vs. Using Only One LoRA
#2270 opened by beyondguo - 18
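For the multi-adapter discrepancy above, one common cause is that the active adapter is never switched inside the loop. A minimal sketch of explicit switching with `set_adapter`; the base model name and adapter paths are illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model
model = PeftModel.from_pretrained(base, "path/to/adapter_a", adapter_name="a")
model.load_adapter("path/to/adapter_b", adapter_name="b")

for name in ["a", "b"]:
    model.set_adapter(name)  # activate exactly one adapter per iteration
    # outputs = model.generate(**inputs)
```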
Bug: BOFT forward/merging with CUDA
#2219 opened by BenjaminBossan - 6
TypeError: LlamaModel.forward() got an unexpected keyword argument 'labels' when using LoRA on Llama 3.2
#2243 opened by hoang1645 - 2
active_adapter = model.active_adapters[0] TypeError: 'method' object is not subscriptable
#2249 opened by MAXNORM8650 - 5
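The `TypeError` above arises because `active_adapters` is a method on transformers' `PeftAdapterMixin` but a plain list attribute on peft's `PeftModel`. A minimal sketch that handles both cases, assuming `model` is an adapter-equipped model from either library:

```python
# Call it when it is a bound method (transformers mixin); otherwise it is
# already the list of adapter names (peft PeftModel).
adapters = model.active_adapters() if callable(model.active_adapters) else model.active_adapters
active_adapter = adapters[0]
```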
Is this the right way to check whether a model has been trained as expected?
#2255 opened by qgallouedec - 5
lm_head layer problem in Gemma2-2b-it
#2244 opened by OmarHelwe10 - 2
A guide for adding a new fine-tuning method to the docs
#2251 opened by YF-T - 2
PeftModelForSequenceClassification.add_adapter() got an unexpected keyword argument 'low_cpu_mem_usage'
#2246 opened by TristanDonze - 3
LoraConfig not JSON serializable for logging to wandb
#2239 opened by v-bosch - 19
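For the wandb logging issue above, converting the config to a plain dict first avoids the serialization error. A minimal sketch, assuming the standard `to_dict()` on peft configs; set-valued fields such as `target_modules` still need converting:

```python
import json
from peft import LoraConfig

config = LoraConfig(r=8, target_modules=["q_proj", "v_proj"])
# peft stores target_modules as a set, which json cannot encode directly.
config_dict = {k: (sorted(v) if isinstance(v, set) else v) for k, v in config.to_dict().items()}
json.dumps(config_dict)  # serializable; pass config_dict to wandb.init(config=...)
```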
KeyError: 'base_model.model.model.model.layers.14.mlp.down_proj' when merging and exporting the model on CUDA after training with QLoRA (rank 4)
#2213 opened by xiaoheiyue - 2
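For the merge KeyError above, a common workaround is to reload the base model unquantized before attaching and merging the adapter, since merging directly into 4-bit weights is fragile. A minimal sketch; the paths are illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Reload the base weights without 4-bit quantization, then merge.
base = AutoModelForCausalLM.from_pretrained("path/to/base-model", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/qlora-adapter")
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```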