How is the vision_tower inserted into LLaMA?
Closed this issue · 3 comments
paulpaul91 commented
https://github.com/haotian-liu/LLaVA/blob/main/llava/train/train.py#L427
The original LLaMA model does not include a vision tower, so you must have changed the transformers code to support it. Is that modified code public?
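For context on what the question is asking, here is a rough conceptual sketch, not the actual code behind the linked line in llava/train/train.py, of how a vision tower can be attached to a causal LM: a CLIP encoder produces patch features, a small projector maps them into the LLM's embedding space, and the projected image tokens are prepended to the text embeddings. The checkpoint paths, hidden sizes, and the embed_image_and_text helper are illustrative assumptions.

```python
# Conceptual sketch only: the general pattern of bolting a CLIP vision tower
# onto a causal language model, not LLaVA's actual implementation.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    CLIPImageProcessor,
    CLIPVisionModel,
)

# CLIP ViT-L/14 is used here as an example vision tower.
vision_tower = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Placeholder path to a locally converted LLaMA checkpoint (assumption).
llm = AutoModelForCausalLM.from_pretrained("path/to/llama-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-7b-hf")

# Linear projector from the CLIP hidden size (1024 for ViT-L/14)
# to the LLM hidden size (4096 for LLaMA-7B).
mm_projector = torch.nn.Linear(vision_tower.config.hidden_size, llm.config.hidden_size)

def embed_image_and_text(image, prompt):
    """Return a combined embedding sequence: image patch tokens + text tokens."""
    pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
    with torch.no_grad():
        patch_features = vision_tower(pixel_values).last_hidden_state  # (1, 257, 1024)
    image_embeds = mm_projector(patch_features)                        # (1, 257, 4096)

    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    text_embeds = llm.get_input_embeddings()(input_ids)                # (1, seq_len, 4096)

    # Prepend the projected image tokens to the text embeddings; the combined
    # sequence would then be fed to the LLM via inputs_embeds.
    return torch.cat([image_embeds, text_embeds], dim=1)
```

As the note quoted below explains, the modified transformers fork packages this kind of multimodal support, along with the LLaMA tokenizer handling, which is why the stock library is not enough.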
WooKimm commented
I just had this question too; we probably both missed the note in the README:
NOTE: In this research preview, we used a modified version of huggingface/transformers library to support multimodal models and the LLaMA tokenizer.
Make sure that you are using the correct transformers library from https://github.com/haotian-liu/transformers_llava.
paulpaul91 commented
Thanks!
haotian-liu commented
@WooKimm Thanks for sharing this!
Yes, please make sure to install the correct transformers library from https://github.com/haotian-liu/transformers_llava.
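A quick way to confirm you are running the forked build, rather than a stock transformers install shadowing it, is to check which package Python actually imports; the expected location is environment-specific.

```python
# Sanity check: print which transformers build is being imported.
import transformers

print("version :", transformers.__version__)
# This path should point into your transformers_llava install,
# not a stock transformers copy from PyPI.
print("location:", transformers.__file__)
```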