Issues
- #45 Repeating tokens in optimized prompt (opened by AMJasser, 1 comment)
- #42 Seems to have a bug in the evaluate function (opened by A11en0, 3 comments)
- #41 Questions on the gradients of the LLM (opened by Schwartz-Zha, 0 comments)
- #40 Baseline results (opened by Davido111200, 2 comments)
- #37 BrokenPipeError: [Errno 32] Broken pipe (opened by Xinhui-Zhu, 1 comment)
- #38 Reproducibility and randomness (opened by YasamanJafari, 2 comments)
- #39 Why does this method need so many steps? (opened by A11en0, 3 comments)
- #34 About the RL training (opened by FayeXXX, 3 comments)
- #35 About the prepended special character \u0120 (opened by guozix, 1 comment)
- #32 Network is unreachable (opened by rabi-fei, 1 comment)
- #33 Train using vertexai (opened by yguezpa, 7 comments)
- #29 A question about the perplexity (ppl) score (opened by FayeXXX, 2 comments)
- #31 Question (opened by 18712234451, 2 comments)
- #28 Clarification on the RL problem (opened by hv68, 2 comments)
- #27 Classification with GPT and training time (opened by MatthewCYM, 2 comments)
- #26 Classification with GPT (opened by MatthewCYM, 1 comment)
- #25 RL-Prompt MLP loss (opened by hv68, 2 comments)
- #24 Scope of this project (opened by YujingYang666777, 7 comments)
- #22 A question about prompt initialization (opened by beeevita, 3 comments)
- #12 How to judge the performance of the prompts after running "run_fsc.py"? (opened by jasonyin718, 2 comments)
- #21 ImportError (opened by beeevita, 4 comments)
- #18 Output data of your experiment (opened by li-jing-wen, 1 comment)
- #13 Transferring prompts across LMs (opened by 52ie, 2 comments)
- #10 Some doubts about a symbol (opened by oujieww, 5 comments)
- #8 RuntimeError (opened by Ericmututu, 4 comments)