ValueError: Trying to set a tensor of shape torch.Size([176128, 32]) in "trellis" (which has shape torch.Size([5636096])), this looks incorrect.
DmitryRedko opened this issue · 8 comments
I am receiving a warning that NVML cannot be initialized, followed by a ValueError when loading a model from Hugging Face. The error message indicates a mismatch in tensor dimensions.
Steps to Reproduce:
Just run the eval_zeroshot.py script with the --hf_path argument pointing to the Hugging Face model path relaxml/Llama-2-7b-QTIP-2Bit.
My installed package versions match your requirements.txt.
Please let me know if you need any further information.
BTW: I tested the quip-sharp package, and it worked without any errors.
I solved the NVML problem; the dimension mismatch remains.
There was a bug in one of the commits with the saved tensor shape. I think I fixed it a few weeks ago - try pulling the latest repo. If that doesn't work, I will get around to fixing it in a week or two. The issue is just that some of the models have the trellis saved as a 2D tensor and others have it saved flattened. In the meantime, you can modify this line to be 1D or 2D to patch the problem.
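For reference, a minimal sketch of that kind of patch. This is an assumption about the fix, not the repo's actual code: `load_trellis`, `num_rows`, and `row_len` are hypothetical names for illustration.

```python
import torch

def load_trellis(stored: torch.Tensor, num_rows: int, row_len: int) -> torch.Tensor:
    """Accept a trellis checkpointed either flattened (1D) or as 2D (hypothetical helper)."""
    if stored.dim() == 1:
        # Checkpoint stores the trellis flattened; restore the 2D layout.
        assert stored.numel() == num_rows * row_len, "unexpected element count"
        return stored.view(num_rows, row_len)
    # Checkpoint already stores the 2D layout.
    assert tuple(stored.shape) == (num_rows, row_len), "unexpected trellis shape"
    return stored
```

For the shapes in the traceback, 176128 * 32 == 5636096, so the `view` is lossless in either direction.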
Would flattening the trellis and resaving the model solve the tensor dimension mismatch when loading from Hugging Face? Or is there some kind of mask that determines how the tensor should be reshaped?
After performing that manipulation, the model loaded and inference ran, but I am not sure whether all the trellis weights are now in their correct places.
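For what it's worth, a quick sanity check, assuming the checkpoint expects row-major (C-contiguous) element order, which is what `torch.flatten` and `view` use; under that assumption the round trip is exact and no extra mask or permutation is needed:

```python
import torch

# Stand-in tensor with the 2D trellis shape from the traceback.
t = torch.arange(176128 * 32).view(176128, 32)

flat = t.flatten()  # shape [5636096], row-major element order
# Viewing the flat tensor back as 2D reproduces the original exactly,
# so every weight lands back in its original place.
assert torch.equal(flat.view(176128, 32), t)
```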
https://huggingface.co/relaxml/Llama-2-7b-QTIP-2Bit
This one. By the way, can I get a fine-tuned version from somewhere?
And also, can I get a fine-tuned version of QuIP# from somewhere?
The instruct-tuned versions should be on Hugging Face as well. The QuIP# version is here: https://huggingface.co/relaxml/Llama-2-7b-chat-E8P-2Bit and the QTIP version is here: https://huggingface.co/relaxml/Llama-2-7b-chat-QTIP-2Bit.
Everything seems to be working fine on my end. Let me know if you are still running into issues with the HF models.