thuhcsi/SECap

Is Llama frozen or tunable?

Closed this issue · 0 comments

I read your paper and understood that Llama is kept frozen throughout training.
However, it appears tunable in the following code that calls Llama. Which is correct, or is there something I missed?

SECap/model2.py

Line 160 in 705bd69

outputs=self.llama_model(
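
For reference, simply calling `self.llama_model(...)` in the forward pass does not by itself make the model tunable; what matters is whether `requires_grad` is disabled on its parameters and whether they are passed to the optimizer. Below is a minimal sketch of how I would freeze and verify the Llama weights (the checkpoint path and variable names are placeholders, not taken from this repo):

```python
import torch
from transformers import LlamaForCausalLM

# Hypothetical checkpoint path; stands in for however SECap loads Llama.
llama_model = LlamaForCausalLM.from_pretrained("path/to/llama")

# Freezing: disable gradients for every Llama parameter.
for param in llama_model.parameters():
    param.requires_grad = False

# Verification: a truly frozen model should report zero trainable parameters.
trainable = [n for n, p in llama_model.named_parameters() if p.requires_grad]
print(f"trainable Llama params: {len(trainable)}")  # expect 0 when frozen
```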

By the way, how about the code for the first stage of training? It does not seem to be included in this repository.