ImportError: cannot import name 'checkpoint' from 'transformers.models.t5.modeling_t5'
Dylan-LDR opened this issue · 15 comments
Thanks for your great work. I successfully installed vima and vima_bench. However, when I run the example with `python3 scripts/example.py --ckpt=../VimaBench/ckpts/200M.ckpt --partition=placement_generalization --task=follow_order`, importing `modeling_t5` in vima/nn/prompt_encoder/prompt_encoder.py fails with the `ImportError` above (cannot import `checkpoint`). Did I miss a requirement, and how can I fix this?
I checked the source code and documentation of the Transformers `modeling_t5` module, but I could not find a definition of `checkpoint`.
same question +1 @yunfanjiang
Thanks for contributing such inspiring work. I guess this problem might also be related to the HF transformers version.
Yes, I think the problem lies in a transformers version conflict. Maybe `checkpoint` was removed after a certain update, which is why I cannot find it in my installed transformers 4.36.1. Unfortunately, the requirements file does not pin package versions, and it is hard to find support from the community. I hope there can be more explanation and discussion.
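Since the requirements aren't pinned, it helps to check exactly which version you actually have installed. A minimal standard-library sketch (the helper name `check_pin` is hypothetical, not from the repo):

```python
from importlib.metadata import version, PackageNotFoundError

def check_pin(package: str, wanted: str) -> bool:
    """Return True iff `package` is installed at exactly version `wanted`."""
    try:
        return version(package) == wanted
    except PackageNotFoundError:
        return False

# e.g. check_pin("transformers", "4.20.0") before running the example script
```

If this returns `False` for the version a comment below recommends, the import error is expected.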
Yeah, the transformers version seems to be the issue. Version 4.20.0 worked for me.

```
pip install transformers==4.20.0
```
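If downgrading isn't an option, a small compatibility shim may also work. This is a sketch under one assumption I have not verified against every release: older transformers re-exported torch's activation-checkpointing helper from `modeling_t5`, so falling back to `torch.utils.checkpoint.checkpoint` should yield an equivalent callable. The name `resolve_checkpoint` is hypothetical:

```python
import importlib

def resolve_checkpoint():
    """Find a `checkpoint` implementation: try the old transformers
    re-export first, then fall back to torch's own helper."""
    for module, name in [
        ("transformers.models.t5.modeling_t5", "checkpoint"),  # old location
        ("torch.utils.checkpoint", "checkpoint"),              # assumed origin
    ]:
        try:
            return getattr(importlib.import_module(module), name)
        except (ImportError, AttributeError):
            continue
    raise ImportError("no checkpoint implementation found")
```

You would then edit vima/nn/prompt_encoder/prompt_encoder.py to call `resolve_checkpoint()` in place of the failing import.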
> Yes, I think the problem lies in a transformers version conflict. Maybe `checkpoint` was removed after a certain update, which is why I cannot find it in my installed transformers 4.36.1.
Thanks for your information, it's helpful.
> Yeah, the transformers version seems to be the issue. Version 4.20.0 worked for me.
>
> `pip install transformers==4.20.0`
Thanks, `transformers==4.20.0` is helpful. However, I then encountered a new problem:

```
[2023-12-19T02:58:39Z ERROR cached_path::cache] Max retries exceeded for https://huggingface.co/t5-base/resolve/main/tokenizer.json
Traceback (most recent call last):
  File "/home/jmw/VIMA-main/scripts/example.py", line 74, in <module>
    tokenizer = Tokenizer.from_pretrained("t5-base")
Exception: Model "t5-base" on the Hub doesn't have a tokenizer
```
#20 (comment)
This reply works for me.
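For anyone who can't open the linked comment: as described downthread, the workaround is to download `tokenizer.json` for t5-base manually and load it from disk instead of fetching it from the Hub. A minimal sketch, assuming the file sits next to the script; `load_local_tokenizer` is a hypothetical name:

```python
from pathlib import Path

def load_local_tokenizer(path: str = "tokenizer.json"):
    """Load a manually downloaded tokenizer.json instead of
    fetching it from the Hugging Face Hub."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(
            f"{p} not found; download tokenizer.json from the t5-base "
            "repo on the Hub and place it here first"
        )
    # imported lazily so the file check above works even without `tokenizers`
    from tokenizers import Tokenizer
    return Tokenizer.from_file(str(p))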
> #20 (comment) This reply works for me.
Hello, thanks for your information.
Can you give more details about this? I tried it, but I still encounter an error.
> #20 (comment) This reply works for me.

> Hello, thanks for your information. Can you give more details about this? I tried it, but I still encounter an error.
I just followed the instructions in #20 (comment) and loaded the tokenizer from a local directory. It works fine for me. I guess the assertion error is caused by the missing VIMA model checkpoint. Did you download the checkpoint file 200M.ckpt from HF and place it in your code directory? I cannot find it in the file structure you posted.
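A quick sanity check for the checkpoint path before running the example can rule this out; a sketch with a hypothetical helper name, using the path from the original command:

```python
from pathlib import Path

def ckpt_present(path: str = "../VimaBench/ckpts/200M.ckpt") -> bool:
    """Return True iff the VIMA checkpoint file exists at `path`."""
    return Path(path).is_file()
```

If this returns `False`, download 200M.ckpt from the VIMA HF repo into that directory first.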
> #20 (comment) This reply works for me.

> Hello, thanks for your information. Can you give more details about this? I tried it, but I still encounter an error.

> I just followed the instructions in #20 (comment) and loaded the tokenizer from a local directory. It works fine for me. I guess the assertion error is caused by the missing VIMA model checkpoint. Did you download the checkpoint file 200M.ckpt from HF and place it in your code directory? I cannot find it in the file structure you posted.
Many thanks, it's working after following your suggestion.
Downgrading with `pip install transformers==4.34.1` did the trick for me.