infly-ai/INF-MLLM

[Error] Running demo

Matesanz opened this issue · 11 comments

Description

First, thank you for such a great open-source contribution. 👍
I can't run the demo as stated in README.md; I get this error: 😔

Traceback (most recent call last):
  File "/home/matesanz/projects/INF-MLLM/demo.py", line 129, in <module>
    main(args)
  File "/home/matesanz/projects/INF-MLLM/demo.py", line 82, in main
    tokenizer = AutoTokenizer.from_pretrained(args.model_path, use_fast=False)
  File "/home/matesanz/anaconda3/envs/infmllm/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 702, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/matesanz/anaconda3/envs/infmllm/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1841, in from_pretrained
    return cls._from_pretrained(
  File "/home/matesanz/anaconda3/envs/infmllm/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 2004, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/matesanz/anaconda3/envs/infmllm/lib/python3.9/site-packages/transformers/models/llama/tokenization_llama.py", line 144, in __init__
    self.sp_model.Load(vocab_file)
  File "/home/matesanz/anaconda3/envs/infmllm/lib/python3.9/site-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/home/matesanz/anaconda3/envs/infmllm/lib/python3.9/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] 

How to reproduce

  1. Install the dependencies as stated in README.md
  2. Download the model: git clone https://huggingface.co/mightyzau/InfMLLM_7B_Chat.git (see the download check below)
  3. Run: CUDA_VISIBLE_DEVICES=0 python demo.py
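
For reference, a quick way to check whether the Git LFS objects actually downloaded (run inside the cloned model repo; in git lfs ls-files, "*" means the object is present and "-" means the file is still a pointer stub):

cd InfMLLM_7B_Chat
git lfs ls-files   # "*" = object downloaded, "-" = pointer only
ls -lh             # real weight files should be gigabytes, not ~130-byte stubs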

Related

I also tried CUDA_VISIBLE_DEVICES=0 python demo.py --model_path mightyzau/InfMLLM_7B_Chat

but got:

Traceback (most recent call last):
  File "/home/matesanz/projects/INF-MLLM/demo.py", line 129, in <module>
    main(args)
  File "/home/matesanz/projects/INF-MLLM/demo.py", line 83, in main
    model = AutoModel.from_pretrained(args.model_path, trust_remote_code=True, torch_dtype=torch.bfloat16)
  File "/home/matesanz/anaconda3/envs/infmllm/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 487, in from_pretrained
    cls.register(config.__class__, model_class, exist_ok=True)
  File "/home/matesanz/anaconda3/envs/infmllm/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 513, in register
    raise ValueError(
ValueError: The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (model has <class 'transformers.models.llama.configuration_llama.LlamaConfig'> and you passed <class 'transformers_modules.mightyzau.InfMLLM_7B_Chat.d3f0fe9071ccdd8dacb87b35af251a18ed9e7438.configuration_infmllm_chat.InfMLLMChatConfig'>. Fix one of those so they match!

It looks like an error caused by the wrong version of transformers. Please change transformers to version 4.31.0 and try again.

pip install transformers==4.31.0 sentencepiece==0.1.99
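
You can confirm which versions are actually being imported with, for example:

python -c "import transformers, sentencepiece; print(transformers.__version__, sentencepiece.__version__)"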

Those are, indeed, the versions installed.

pip list:

Package            Version
------------------ ----------
Brotli             1.0.9
certifi            2023.11.17
cffi               1.16.0
charset-normalizer 2.0.4
cryptography       41.0.7
filelock           3.13.1
fsspec             2023.12.2
gmpy2              2.1.2
huggingface-hub    0.20.2
idna               3.4
Jinja2             3.1.2
MarkupSafe         2.1.3
mkl-fft            1.3.8
mkl-random         1.2.4
mkl-service        2.4.0
mpmath             1.3.0
networkx           3.1
numpy              1.26.3
packaging          23.2
Pillow             10.0.1
pip                23.3.1
pycparser          2.21
pyOpenSSL          23.2.0
PySocks            1.7.1
PyYAML             6.0.1
regex              2023.12.25
requests           2.31.0
safetensors        0.4.1
sentencepiece      0.1.99  👈
setuptools         68.2.2
sympy              1.12
timm               0.9.5
tokenizers         0.13.3
torch              2.1.0
torchaudio         2.1.0
torchvision        0.16.0
tqdm               4.66.1
transformers       4.31.0  👈
triton             2.1.0
typing_extensions  4.9.0
urllib3            1.26.18
wheel              0.41.2

That is very strange. I tried it again and your error did not appear. (I have encountered similar errors before, and they were caused by the transformers version.)


😄 Thank you, how did you clone the model?
Did you use git clone https://huggingface.co/mightyzau/InfMLLM_7B_Chat.git?
Is that repo up to date?
Thanks! 🙂

This repository is up to date. You can compare your files against the following md5sum results.

(screenshot: md5sum results of the model files)
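
To compute the sums on your side, something like this should work (the exact file names depend on what is in the HF repo):

cd InfMLLM_7B_Chat
md5sum *.bin *.model *.json   # compare against the sums in the screenshot above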

I tried on a completely fresh machine rented in the cloud: an NC8as T4 v3 (T4 GPU with 16 GB memory, 8 vCPUs, 56 GB RAM).

Should that be enough? 🤔

I installed Anaconda from scratch and ran:

git clone https://github.com/infly-ai/INF-MLLM.git
cd INF-MLLM/
git clone https://huggingface.co/mightyzau/InfMLLM_7B_Chat.git

conda create -n infmllm python=3.9
conda activate infmllm
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple


CUDA_VISIBLE_DEVICES=0 python demo.py

Please check whether you downloaded the InfMLLM-7B-Chat model correctly. If the download succeeded, execution should reach line 485, not line 487 as in your error.

(screenshot: the relevant code in transformers/models/auto/auto_factory.py around lines 485-487)
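
One quick way to tell a failed LFS download: the pointer stubs are small text files with a version header. For example, using tokenizer.model from the traceback above (the weight files can be checked the same way):

head -c 120 InfMLLM_7B_Chat/tokenizer.model
# a real sentencepiece model is binary; if this prints
# "version https://git-lfs.github.com/spec/v1", the download failed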

Yep, it was a problem with the model download (Git LFS was not running its hooks, so the weights were never fetched). 👍
Thank you for your help!
Just out of curiosity, what hardware are you using for inference?

We train and test on the 80 GB A100 GPU.

Can you share how you fixed it? I am facing the same errors as you.

I downloaded the files manually from Hugging Face instead of using git clone https://huggingface.co/mightyzau/InfMLLM_7B_Chat.git.
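
For completeness, re-fetching the LFS objects should also work, assuming git-lfs is installed, or you can pull a fresh copy with the Hub CLI that ships with huggingface_hub:

# from inside the cloned repo, re-fetch the LFS objects:
cd InfMLLM_7B_Chat
git lfs install   # re-registers the LFS hooks
git lfs pull      # downloads the real files behind the pointers

# or download a fresh copy (huggingface_hub >= 0.17):
huggingface-cli download mightyzau/InfMLLM_7B_Chat --local-dir InfMLLM_7B_Chat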