2 reference locations of the model
atxcowboy opened this issue · 5 comments
Hello, just a heads-up:
The code appears to look for the model in two different locations:
- model_zoo/llama\7B\
- model_zoo\llama_7B_hf
If I copy the model to both locations, the demo server comes up.
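The mixed separators in the traceback (`model_zoo/llama\\7B\\params.json`) are consistent with a forward-slash base path being joined with `os.path.join` on Windows. A minimal sketch reproducing the effect on any platform (using `ntpath`, the Windows flavor of `os.path`) and one way paths could be normalized with `pathlib`:

```python
import ntpath
from pathlib import PureWindowsPath

# ntpath.join appends components with backslashes but leaves the
# forward slash in the base untouched, giving the mixed path seen
# in the traceback:
mixed = ntpath.join("model_zoo/llama", "7B", "params.json")
print(mixed)  # model_zoo/llama\7B\params.json

# pathlib normalizes the whole path to a single separator style:
clean = PureWindowsPath("model_zoo/llama", "7B", "params.json")
print(clean)  # model_zoo\llama\7B\params.json
```

This alone would not explain two distinct directories, but it does explain why hard-coded forward-slash prefixes behave inconsistently on Windows.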
(ichat) E:\ai\InternGPT>python -u app.py --load "HuskyVQA_cuda:0,SegmentAnything_cuda:0,ImageOCRRecognition_cuda:0" --port 3456
[05/17 20:42:38] bark.generation WARNING: torch version does not support flash attention. You will get faster inference speed by upgrade torch to newest nightly version.
Initializing InternGPT, load_dict={'HuskyVQA': 'cuda:0', 'SegmentAnything': 'cuda:0', 'ImageOCRRecognition': 'cuda:0'}
No distributions are installed for the Windows Subsystem for Linux.
Distributions for installation can be found in the Microsoft Store:
https://aka.ms/wslstore
Traceback (most recent call last):
File "app.py", line 221, in <module>
bot = ConversationBot(load_dict=load_dict)
File "E:\ai\InternGPT\iGPT\controllers\ConversationBot.py", line 141, in __init__
self.models[class_name] = globals()[class_name](device=device)
File "E:\ai\InternGPT\iGPT\models\husky.py", line 368, in __init__
download_if_not_exists(base_path="model_zoo/llama",
File "E:\ai\InternGPT\iGPT\models\husky.py", line 351, in download_if_not_exists
write_model(
File "E:\ai\InternGPT\iGPT\models\husky_src\convert_llama_weights_to_hf.py", line 93, in write_model
params = read_json(os.path.join(input_base_path, "params.json"))
File "E:\ai\InternGPT\iGPT\models\husky_src\convert_llama_weights_to_hf.py", line 79, in read_json
with open(path, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'model_zoo/llama\\7B\\params.json'
(ichat) E:\ai\InternGPT>python -u app.py --load "HuskyVQA_cuda:0,SegmentAnything_cuda:0,ImageOCRRecognition_cuda:0" --port 3456
[05/17 20:44:16] bark.generation WARNING: torch version does not support flash attention. You will get faster inference speed by upgrade torch to newest nightly version.
Initializing InternGPT, load_dict={'HuskyVQA': 'cuda:0', 'SegmentAnything': 'cuda:0', 'ImageOCRRecognition': 'cuda:0'}
Loading base model
Traceback (most recent call last):
File "app.py", line 221, in <module>
bot = ConversationBot(load_dict=load_dict)
File "E:\ai\InternGPT\iGPT\controllers\ConversationBot.py", line 141, in __init__
self.models[class_name] = globals()[class_name](device=device)
File "E:\ai\InternGPT\iGPT\models\husky.py", line 368, in __init__
download_if_not_exists(base_path="model_zoo/llama",
File "E:\ai\InternGPT\iGPT\models\husky.py", line 359, in download_if_not_exists
apply_delta(output_dir, new_path, delta_path)
File "E:\ai\InternGPT\iGPT\models\husky_src\load_ckpt.py", line 11, in apply_delta
base = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\models\auto\auto_factory.py", line 441, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\models\auto\configuration_auto.py", line 916, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\configuration_utils.py", line 573, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\configuration_utils.py", line 628, in _get_config_dict
resolved_config_file = cached_file(
File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\utils\hub.py", line 380, in cached_file
raise EnvironmentError(
OSError: model_zoo\llama_7B_hf does not appear to have a file named config.json. Checkout 'https://huggingface.co/model_zoo\llama_7B_hf/None' for available files.
Thanks for the feedback.
By default, only one of them is needed to build the husky ckpt. Did you have those two llama folders created, but without the actual ckpt inside?
In that case, you can remove the folders; rerunning the code will then automatically download the ckpt and perform the conversion.
Let me know if this matches your situation and whether any issues remain.
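The failure mode described above (an empty folder exists, so the download is skipped, then the conversion fails) could be avoided by checking for the expected files rather than just the directory. A minimal sketch with a hypothetical helper, not the actual `download_if_not_exists` implementation:

```python
import os

def has_checkpoint(path, required=("params.json",)):
    # Hypothetical guard: an empty or partially created folder should
    # not be treated as a usable checkpoint. Only report success when
    # every expected file is actually present.
    return all(os.path.isfile(os.path.join(path, f)) for f in required)

# An empty (or missing) folder would then trigger a re-download:
if not has_checkpoint("model_zoo/llama/7B"):
    print("checkpoint incomplete, downloading...")
```

Until something like this is in place, manually deleting the stale folders, as suggested, is the workaround.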
Hi @Zeqiang-Lai, I have a similar problem, please help me solve it:
FileNotFoundError: [Errno 2] No such file or directory: 'model_zoo/llama/7B/params.json'
Besides, I found that in the script llama_download.sh, PRESIGNED_URL=""
is empty, and the comment says "replace with presigned url from email".
Which URL should I use for PRESIGNED_URL?
Looking forward to your reply, thanks!
@tinaYA524 Due to license restrictions, we cannot provide the original LLaMA checkpoint. You need to fill out a form to request a URL from Facebook. Please refer to https://github.com/facebookresearch/llama for more instructions.
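Once the email arrives, the link goes into the variable mentioned above. A sketch of the edit (the URL below is a placeholder, not a real link):

```shell
# In llama_download.sh, replace the empty value with the presigned
# link from Facebook's email (placeholder shown here):
PRESIGNED_URL="https://example.com/your-presigned-link-from-email"
```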
Thanks for the quick reply, I will try to figure this out, thanks again!
You are welcome. Thanks for your attention.