[🐛BUG] Prefix-tuning with T5 raises AttributeError
Closed this issue · 1 comments
luoyetingqiu commented
Describe the bug
AttributeError: 'GPT2LMHeadModel' object has no attribute 'set_efficient_tuning'
When I use `efficient_methods`:
```bash
python run_textbox.py \
    --model=T5 \
    --model_path=t5-large \
    --dataset=webnlg \
    --gpu_id=4 \
    --efficient_methods=['prefix-tuning'] \
    --efficient_kwargs={'prefix_length':\ 100,\ 'prefix_dropout':\ 0.1,\ 'prefix_mid_dim':\ 512} \
    --filename CP/T5_large_prefix_tuning
```
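As an aside, the escaped spaces (`\ `) in `--efficient_kwargs` keep the whole dict literal as a single shell argument instead of letting the shell split it at each space. A quick sketch with Python's `shlex` module, which mimics POSIX shell tokenization, shows the effect (the argument string here is a shortened stand-in, not the full command):

```python
import shlex

# Backslash-space prevents the shell from splitting the dict
# literal into several arguments; shlex.split mimics POSIX
# shell tokenization, including quote removal.
arg = "--efficient_kwargs={'prefix_length':\\ 100,\\ 'prefix_dropout':\\ 0.1}"
tokens = shlex.split(arg)
print(len(tokens))  # → 1: the whole dict stays one argument
print(tokens[0])    # → --efficient_kwargs={prefix_length: 100, prefix_dropout: 0.1}
```

Note that the shell also strips the single quotes, so the program receives the key names unquoted and must parse them itself.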
StevenTang1998 commented
You need to install our fork of Transformers by following the instructions,
or install it directly with `pip install git+https://github.com/RUCAIBox/transformers.git`.