Issues
[doc] issues with rendering docs in FF
#1644 opened - 11
Support EETQ QLoRA
#1643 opened - 22
Error running the deepspeed qlora example
#1636 opened - 4
Prompt Tuning with Zero3 didn't work
#1633 opened - 2
Seeking Help/Question: Best practice for fine-tuning LLM with LoRA but with additional parameters
#1632 opened - 4
Documentation Clarification for loading PEFT models with AutoModelForCausalLM.from_pretrained
#1619 opened - 2
Support HQQ method.
#1616 opened - 14
Prefix tuning configuration issue
#1610 opened - 11
Configuration issue
#1608 opened - 4
Using LoRA on custom models
#1606 opened - 7
LISA
#1601 opened - 12
examples/sft/run_peft.sh model load dtype error
#1598 opened - 11
TypeError: ChatGLMForConditionalGeneration.forward() got an unexpected keyword argument 'decoder_input_ids'
#1596 opened - 4
Merging models and feature extraction
#1595 opened - 5
Getting DoRA model is very slow
#1593 opened - 0
...
#1592 opened - 2
Error in LoraModel docstring
#1586 opened - 8
Error in merge_and_unload for adapter with a prefix
#1579 opened - 4
MPS: Cannot add LoRA to Unet (LoftQ)
#1575 opened - 11
'set_adapter()' throws "ValueError: Adapter not found in odict_keys" after 'load_adapter()'
#1574 opened - 3
Is PP or TP supported for multi-node training?
#1572 opened - 4
When peft>=0.7.0, fine-tuning ChatGLM3-6B causes the model to become dumb with a loss of 0
#1568 opened - 8
Base Model Revision
#1567 opened - 5
Cannot load int8/int4 model with DeepSpeed ZeRO-3
#1566 opened - 3
Add a new fine-tuning method called Conv-LoRA
#1560 opened - 1
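
Several of the issues above (#1606 on LoRA for custom models, #1579 on merge_and_unload, #1595 on merging) come down to the same LoRA update rule: the frozen base weight W is augmented by a trainable low-rank product, W' = W + (alpha/r) * B @ A. A minimal NumPy sketch of that rule, independent of the PEFT library (the function names here are illustrative, not PEFT's API):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Forward pass with a frozen base weight plus a low-rank LoRA update.

    x: (d_in,) input vector
    W: (d_out, d_in) frozen base weight
    A: (r, d_in) and B: (d_out, r) trainable low-rank factors
    """
    scaling = alpha / r
    # Base projection plus the scaled low-rank correction B @ A @ x
    return W @ x + scaling * (B @ (A @ x))

def merge_lora(W, A, B, alpha=16, r=8):
    """Fold the LoRA update into the base weight.

    Conceptually this is what merging an adapter does: after merging,
    a plain matmul with the returned weight reproduces lora_forward.
    """
    return W + (alpha / r) * (B @ A)
```

Because B is conventionally initialized to zero, a freshly added adapter leaves the base model's outputs unchanged; training then moves only A and B, and merging produces a single dense weight with no extra inference cost.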