Issues
How to adapt LoRA for nn.ConvTranspose2d?
#160 opened by vanmeruso - 1
Cannot implement LoRA on a custom model containing a transformer encoder from PyTorch
#161 opened by wsuSaiman - 7
[FATAL BUG REPORT] `with torch.no_grad():` cannot disable gradients of the LoRA matrices
#185 opened by bjzhb666 - 0
Understanding Figure 3
#184 opened by Davidyao99 - 9
LoRA adapter checkpoints not downloadable
#141 opened by kibru9399 - 3
Reproduced LoRA results are close but not exact
#165 opened by harsh306 - 1
Parameter count on GPT-2 medium
#172 opened by Heimine - 1
NLU experiments error
#180 opened by jie040109 - 0
Questions about replicating NLU experiments
#177 opened by lky-violet - 2
[Question about multi-GPU training]
#170 opened by FindTheTruth - 1
Questions about running the cola dataset script
#176 opened by dengxingzhi - 5
Where are the LoRA matrices saved?
#173 opened by KelsenMa - 0
Question about the scaling factor
#171 opened by Duperr - 4
Cannot reproduce the results of RoBERTa-base
#151 opened by Ther-nullptr - 0
LoRA on a T5 model
#169 opened by vivektreddy - 4
Cannot use LoRA for a pre-trained model
#123 opened by HelloWorldLTY - 0
lora-dim == lora-r ?
#168 opened by ardand4708 - 0
Dynamic LoRA selection at runtime?
#164 opened by teneous - 0
_conv_forward() error
#162 opened by likaiucas - 3
Is the output the entire model?
#139 opened by licy02 - 1
Layers.py not being executed
#150 opened by Aradhye2002 - 4
Conv1d and Conv3d are not working
#115 opened by gau-nernst - 1
Question about the test set of the GLUE benchmark
#145 opened by James6Chou - 0
Is it necessary to add `model = model.merge_and_unload()` when training a new LoRA adapter?
#156 opened by 4daJKong - 3
How is the trainable-parameter count of 25.19M for GPT-2 M (FT^Top2) computed?
#148 opened by floatingbigcat - 0
[Minor] Possible typos in weight initialization
#146 opened by awgu - 0
Question about seed numbers.
#144 opened by SEONHOK - 3
The description and the behavior don't match
#142 opened by yongchanghao - 1
Question about reproducing RoBERTa-base fine-tuning
#127 opened by tlsdbfk - 0
Support multi-LoRA fine-tuning on the same GPU
#136 opened by merlintang - 0
T(w) problem
#128 opened by fclearner - 2
Some questions about LoRA for pre-trained model
#124 opened by iopwsy - 0
Using gradient checkpointing with LoRA
#120 opened by dudskrk - 3
The package on PyPI does not seem to be updated
#117 opened by MikuAndRabbit - 1
matmul ordering in MergedLinear
#109 opened by zhiqi-0 - 0
fine tuning RoBERTa-base with LoRA (ValueError: Classification metrics can't handle a mix of binary and multilabel-indicator targets)
#116 opened by rozhix - 1
Embedding reset_parameters() implemented incorrectly
#114 opened by chenjiasheng - 1
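Several of the issues above (e.g. the ones on freezing gradients, merging adapters, and where the LoRA matrices are saved) revolve around the same core mechanic: LoRA keeps the pretrained weight W frozen and learns a low-rank update B·A scaled by alpha/r, which can later be merged back into W. A minimal pure-Python sketch of that merge step, with all function and variable names hypothetical (this is not the loralib API):

```python
# Illustration of the LoRA update rule: W' = W + (alpha / r) * (B @ A).
# Matrices are plain lists of lists; names are illustrative only.

def matmul(X, Y):
    """Multiply an (n x k) matrix by a (k x m) matrix."""
    n, k, m = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, leaving the frozen W untouched."""
    scaling = alpha / r
    BA = matmul(B, A)  # (out x r) @ (r x in) -> (out x in), same shape as W
    return [[W[i][j] + scaling * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Frozen 2x2 weight; rank-1 adapters A (r x in) and B (out x r).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]           # 1 x 2
B = [[1.0], [3.0]]         # 2 x 1
merged = merge_lora(W, A, B, alpha=2, r=1)
print(merged)  # [[3.0, 4.0], [6.0, 13.0]]
```

Because only A and B are trained, saving a LoRA checkpoint only requires storing these small matrices, and the merge above restores a single dense weight for inference.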