DAMO-NLP-SG/Video-LLaMA

modelling_llama.py

zeroQiaoba opened this issue · 1 comment

In Video-LLaMA, we noticed that you load LlamaForCausalLM from ./models/modelling_llama.py. I wonder why you don't load it directly with "from transformers import LlamaForCausalLM". Did you make any changes to the original code in the transformers package?
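For reference, here is a minimal sketch (not from the repo) of one way to check whether the vendored file actually differs from the implementation shipped with the installed transformers package; the local path is an assumption based on the layout described above:

```python
"""Diff the vendored modelling_llama.py against the copy bundled with the
installed transformers package. The local path is assumed from the issue
description and may need adjusting to the actual repo layout."""
import difflib
import inspect
import pathlib

import transformers.models.llama.modeling_llama as hf_llama

# Vendored file as described in the issue (assumed location).
local_path = pathlib.Path("models/modelling_llama.py")

# Source of the upstream Hugging Face implementation currently installed.
hf_source = inspect.getsource(hf_llama).splitlines(keepends=True)
local_source = local_path.read_text().splitlines(keepends=True)

diff = difflib.unified_diff(
    hf_source,
    local_source,
    fromfile="transformers/models/llama/modeling_llama.py",
    tofile=str(local_path),
)
print("".join(diff) or "No differences found.")
```

If the diff comes back empty, the local copy is just a pinned snapshot and `from transformers import LlamaForCausalLM` should behave identically; otherwise the diff shows exactly what was modified.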

I was wondering the same thing. Is there any clarification on this?