Downloading the 7B1 model and converting it to ggml FP16 format fails
distalx opened this issue · 0 comments
As described in the README, when I try to run the convert-hf-to-ggml.py
script I get the following error:
```
Loading model: bigscience/bloomz-7b1
pytorch_model.bin:  68%|██████▊   | 9.62G/14.1G [16:33<07:47, 9.68MB/s]
Traceback (most recent call last):
  File "/home/e/Downloads/bloomz.cpp/convert-hf-to-ggml.py", line 84, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, config=config, torch_dtype=torch.float16 if ftype == 1 else torch.float32, low_cpu_mem_usage=True)
  File "/home/e/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
    return model_class.from_pretrained(
  File "/home/e/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3268, in from_pretrained
    resolved_archive_file = cached_file(
  File "/home/e/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 389, in cached_file
    resolved_file = hf_hub_download(
  File "/home/e/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/e/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1461, in hf_hub_download
    http_get(
  File "/home/e/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 569, in http_get
    raise EnvironmentError(
OSError: Consistency check failed: file should be of size 14138162687 but has size 9616967235 (pytorch_model.bin).
We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```
Retrying gives the same error at a different point in the download:

```
Loading model: bigscience/bloomz-7b1
pytorch_model.bin:  75%|███████▌  | 10.6G/14.1G [17:45<05:59, 9.92MB/s]
Traceback (most recent call last):
  File "/home/e/Downloads/bloomz.cpp/convert-hf-to-ggml.py", line 84, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, config=config, torch_dtype=torch.float16 if ftype == 1 else torch.float32, low_cpu_mem_usage=False)
  File "/home/e/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained
    return model_class.from_pretrained(
  File "/home/e/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3268, in from_pretrained
    resolved_archive_file = cached_file(
  File "/home/e/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 389, in cached_file
    resolved_file = hf_hub_download(
  File "/home/e/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/e/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1461, in hf_hub_download
    http_get(
  File "/home/e/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 569, in http_get
    raise EnvironmentError(
OSError: Consistency check failed: file should be of size 14138162687 but has size 10568333935 (pytorch_model.bin).
We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```
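For context, the `OSError` above comes from a simple size comparison in huggingface_hub: the bytes actually written to disk are checked against the size the server advertised. A minimal sketch of that check (the function name is mine, not the library's):

```python
import os

def verify_download(path: str, expected_size: int) -> None:
    """Mirror of the consistency check behind the OSError above:
    the file on disk must match the size the server reported."""
    actual_size = os.path.getsize(path)
    if actual_size != expected_size:
        raise OSError(
            f"Consistency check failed: file should be of size "
            f"{expected_size} but has size {actual_size} "
            f"({os.path.basename(path)})."
        )
```

So the check itself is not the problem; the download simply stops early (9.62G, then 10.6G, of 14.1G), and the partial file never reaches the expected 14138162687 bytes.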
I'm running: `python3 convert-hf-to-ggml.py bigscience/bloomz-7b1 ./models`

I believe this is the call that fails:

```python
AutoModelForCausalLM.from_pretrained(model_name, config=config, torch_dtype=torch.float16 if ftype == 1 else torch.float32, low_cpu_mem_usage=True)
```
I do have enough disk space, so I'm not sure why the download fails at around 10 GB. Also, my .cache directory contains both unfinished files.
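As a workaround sketch (this does not explain the root cause): the unfinished files in the cache are stored as `*.incomplete` blobs, and deleting them before retrying forces a fresh download, which is roughly what the error message's `force_download=True, resume_download=False` hint amounts to. The helper below is my own, assuming the default cache location under `~/.cache/huggingface`:

```python
from pathlib import Path

def remove_incomplete(cache_dir: str) -> list[str]:
    """Delete partially downloaded *.incomplete files so the next
    attempt starts from scratch instead of resuming a corrupt file."""
    removed = []
    for p in Path(cache_dir).expanduser().rglob("*.incomplete"):
        p.unlink()
        removed.append(str(p))
    return removed

if __name__ == "__main__":
    # Hypothetical usage against the default cache location.
    for f in remove_incomplete("~/.cache/huggingface"):
        print("removed", f)
```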
My operating system is Ubuntu 22.04.