aws-neuron/transformers-neuronx

Avoid splitting Hugging Face Hub checkpoint files on disk

Closed this issue · 7 comments

In the current version, transformers-neuronx models can only be instantiated from a directory where the Hugging Face checkpoint has been split into multiple files.

This raises two major issues:

  • first, this doubles the disk space requirements: the checkpoint is first downloaded from the Hugging Face Hub and stored under the ~/.cache directory, then re-serialized as multiple files in another directory (see the sketch after this list),
  • second, it makes it very hard to upload the resulting Neuron model to the Hub because of the many files in the checkpoint directory. Not to mention the cost of uploading multiple files instead of one: users uploading several big models will quickly exhaust their quota.
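For reference, the current two-step flow looks roughly like this (a sketch based on the save_pretrained_split API from the developer guide; the model name, paths, and parameters are illustrative):

import torch
from transformers import AutoModelForCausalLM
from transformers_neuronx.module import save_pretrained_split
from transformers_neuronx.llama.model import LlamaForSampling

# Step 1: download from the Hub into ~/.cache and load into host memory.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf",
                                             torch_dtype=torch.float16)

# Step 2: re-serialize the weights as split files -- the second copy on disk.
save_pretrained_split(model, "./llama-2-70b-split")

# Step 3: the Neuron model can only be instantiated from the split directory.
neuron_model = LlamaForSampling.from_pretrained("./llama-2-70b-split",
                                                batch_size=1, tp_degree=24, amp="f16")
neuron_model.to_neuron()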

Hello,
These are all good points, and we have previously run into the exact issues you describe. The initial API was intended to avoid out-of-memory issues we had been seeing with extremely large models. We intend to provide improved APIs in a future release (such as supporting the original Hugging Face checkpoints directly).

Is there a workaround for this? Trying to push_to_hub => getting rate limited (see huggingface/optimum-neuron#358).

Even just waiting and trying again later doesn't seem to work; all files get uploaded again with new commits.
Of course, it would be nice if the HF side did, e.g., one commit instead of a gazillion, but there isn't much to be done about that here...

In case anyone is wondering the same as me above, here's a single-commit alternative for uploading the files:

from huggingface_hub import HfApi, HfFolder

# Reuse the token stored by a previous `huggingface-cli login`.
huggingface_token = HfFolder.get_token()

api = HfApi()

# upload_folder pushes the whole folder as a single commit
# (multi_commits=False), avoiding the per-commit rate limit.
api.upload_folder(repo_id="my_repo_id",
                  folder_path="path_to_files",
                  token=huggingface_token,
                  multi_commits=False)

Are there any updates on this issue? In optimum-neuron, we now fetch and split the checkpoint on demand, which removed the quota error.

However, the disk usage issue remains, and it is made even worse by the fact that the split weights are stored in full precision (float32), roughly twice the size of a float16 hub checkpoint.

This means that models like Llama-2-70b require a humongous amount of disk space just to be instantiated.

This is what the model should weigh:

$ du -h ~/.cache/huggingface/hub/models--meta-llama--Llama-2-70b-hf/blobs/
129G    /home/ubuntu/.cache/huggingface/hub/models--meta-llama--Llama-2-70b-hf/blobs/

And this is the extra disk usage induced by transformers_neuronx weight splitting:

$ du -h ./data/2.16.1/llama-2-70b-hf-1x2048x24/checkpoint/pytorch_model.bin/
257G    ./data/2.16.1/llama-2-70b-hf-1x2048x24/checkpoint/pytorch_model.bin/
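
For anyone curious, an on-demand fetch-and-split along these lines can be sketched as follows (a hypothetical illustration, not the actual optimum-neuron code): it downloads one safetensors shard at a time and writes each tensor out as its own file, so only one shard sits in memory at once.

import json, os
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

repo_id = "meta-llama/Llama-2-70b-hf"   # any sharded safetensors repo
out_dir = "./split-checkpoint"
os.makedirs(out_dir, exist_ok=True)

# The index file maps every tensor name to the shard that contains it.
index_path = hf_hub_download(repo_id, "model.safetensors.index.json")
with open(index_path) as f:
    weight_map = json.load(f)["weight_map"]

for shard in sorted(set(weight_map.values())):
    # Download and load one shard at a time, never the whole checkpoint.
    shard_path = hf_hub_download(repo_id, shard)
    tensors = load_file(shard_path)
    for name, tensor in tensors.items():
        torch.save(tensor, os.path.join(out_dir, f"{name}.pt"))
    del tensors  # free the shard before fetching the next one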

With 2.18, we can load safetensors checkpoints directly, without the need to save split files. Please give it a try and let us know! Refer to https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/transformers-neuronx/transformers-neuronx-developer-guide.html#checkpoint-support-and-automatic-model-selection for more details.
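For example (a minimal sketch based on the linked developer guide; the model name and parameters are illustrative), the checkpoint can now be loaded without a split directory:

from transformers_neuronx import NeuronAutoModelForCausalLM

# Loads the safetensors checkpoint directly from the Hub cache --
# no intermediate split-file directory is written to disk.
model = NeuronAutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf",
                                                   batch_size=1, tp_degree=24, amp="f16")
model.to_neuron()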

Confirmed the issue is now closed.