KimMeen/Time-LLM

Why Llama-7B?

Closed this issue · 1 comment

There are many large models, why did you choose Llama?

Thank you very much for your interest in our work. Time-LLM has since been generalized into a universal reprogramming-and-alignment framework capable of adapting to any large-scale language model. We selected Llama-7B at the time primarily for its strong performance, which made it a natural choice for a preliminary demonstration. Researchers are encouraged to explore and experiment with other state-of-the-art large models as well.
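As a rough sketch of what swapping the backbone looks like in practice (a hedged example, assuming the repository's `run_main.py` exposes `--llm_model` and `--llm_dim` flags as in the provided training scripts; flag names and values should be checked against the scripts in `./scripts`):

```shell
# Hypothetical command sketch: train Time-LLM on ETTh1 with a different
# backbone by changing --llm_model and matching --llm_dim to that model's
# hidden size (e.g. Llama-7B -> 4096, GPT-2 -> 768). Not a verified command.
python run_main.py \
  --task_name long_term_forecast \
  --model TimeLLM \
  --data ETTh1 \
  --llm_model GPT2 \
  --llm_dim 768
```

The key design point from the reply above is that the reprogramming layer aligns time-series patches with the frozen backbone's token embedding space, so changing the backbone mainly means changing which pretrained weights are loaded and the embedding dimension they expose.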