RuntimeError: Model ViT-L/14 not found
ss32 opened this issue · 6 comments
Running the default example_inference.py results in a runtime error because it specifies a model that does not exist. The full traceback is:
Traceback (most recent call last):
File "example_inference.py", line 151, in <module>
inference(obj={})
File "/home/dev/.local/lib/python3.8/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/home/dev/.local/lib/python3.8/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/home/dev/.local/lib/python3.8/site-packages/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/dev/.local/lib/python3.8/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/dev/.local/lib/python3.8/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/home/dev/.local/lib/python3.8/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "example_inference.py", line 43, in dream
dreamer: BasicInference = BasicInference.create(model_config, verbose=verbose)
File "/home/dev/deepLearning/dalle2-laion/dalle2_laion/scripts/InferenceScript.py", line 29, in create
model_manager = DalleModelManager(config)
File "/home/dev/deepLearning/dalle2-laion/dalle2_laion/dalle2_laion.py", line 103, in __init__
self.clip = model_load_config.clip.create()
File "/home/dev/.local/lib/python3.8/site-packages/dalle2_pytorch/train_configs.py", line 119, in create
return OpenAIClipAdapter(self.model)
File "/home/dev/.local/lib/python3.8/site-packages/dalle2_pytorch/dalle2_pytorch.py", line 281, in __init__
openai_clip, preprocess = clip.load(name)
File "/home/dev/.local/lib/python3.8/site-packages/clip_anytorch-2.2.1-py3.8.egg/clip/clip.py", line 115, in load
raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
RuntimeError: Model ViT-L/14 not found; available models = ['RN50', 'RN101', 'RN50x4', 'RN50x16', 'ViT-B/32', 'ViT-B/16']
This is pulled directly from the Hugging Face repo, so that one likely needs to be corrected. I will update here if I figure out a workaround.
It appears that model is configured in a few places:
~/deepLearning/dalle2-laion$ grep -rnw "ViT-L/14"
models/prior_config.json:5: "model": "ViT-L/14"
models/second_decoder_config.json:41: "model": "ViT-L/14"
configs/gradio.example.json:46: "model": "ViT-L/14"
configs/upsampler.example.json:46: "model": "ViT-L/14"
configs/variation.example.json:33: "model": "ViT-L/14"
notebooks/dalle2_laion_alpha.ipynb:436: " clip=OpenAIClipAdapter(\"ViT-L/14\"),\n",
I'm not home right now, so I can't get on my laptop to look this up. What you're seeing is the clip-anytorch package failing to find a download link for the CLIP model. It's not maintained directly by us, so I can't give a solution off the top of my head. The model could have been removed from the registry of models, but that would be strange.
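For context, the failure comes from the registry lookup in clip-anytorch's clip/clip.py: clip.load() checks the requested name against a dict of download URLs and raises when it is missing. A minimal sketch of that mechanism, with the registry keys taken from the available_models() list in the traceback and placeholder values instead of the real URLs:

```python
# Sketch of the lookup that raises the error above. The keys mirror
# available_models() from the traceback; the "<url>" values are
# placeholders, not the real download links.
_MODELS = {
    "RN50": "<url>",
    "RN101": "<url>",
    "RN50x4": "<url>",
    "RN50x16": "<url>",
    "ViT-B/32": "<url>",
    "ViT-B/16": "<url>",
}

def load(name):
    # If the requested model name is not registered, fail the same way
    # clip.load() does, listing what is available.
    if name not in _MODELS:
        raise RuntimeError(
            f"Model {name} not found; available models = {list(_MODELS)}"
        )
    return _MODELS[name]
```

Since "ViT-L/14" is absent from that dict in clip-anytorch 2.2.1, the lookup raises exactly the RuntimeError shown in the traceback.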
Ah, your clip-anytorch is out of date. You need to upgrade to clip-anytorch==2.4.0 to have ViT-L/14.
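The practical fix is `pip install --upgrade "clip-anytorch==2.4.0"`. To sanity-check whether an installed copy is new enough, a plain tuple comparison works for simple x.y.z version strings; the 2.2.1 in the traceback's egg path is the too-old version. A minimal sketch (my own helper, not part of the package):

```python
# Compare a dotted version string against the 2.4.0 minimum that
# ships the ViT-L/14 download link. Tuple comparison is enough for
# plain x.y.z versions (no pre-release suffixes).
def meets_minimum(installed: str, minimum: str = "2.4.0") -> bool:
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(minimum)

print(meets_minimum("2.2.1"))  # False -- the version from the traceback
print(meets_minimum("2.4.0"))  # True
```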
Yeah, it looks like the version isn't pinned in the main repo.
That should probably be added. I'll let lucid know.
Thanks! Specifying the version fixed it.
Glad it's working!