InftyAI/llmlite

Support loading different adapters

kerthcet opened this issue · 0 comments

Generally, the API looks like:

chat = ChatLLM(
    model_name_or_path="meta-llama/Llama-2-7b-chat-hf",  # required
    task="text-generation",
    adapter="<path/to/adapter>",  # optional
)
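Under the hood, adapter loading could be sketched roughly as below. This is an assumed implementation using Hugging Face `peft` (llmlite may wire this differently); the field names mirror the proposed API, while the `load()` logic is illustrative:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChatLLM:
    # Field names follow the proposed API in this issue;
    # the loading logic below is an assumed sketch, not llmlite's actual code.
    model_name_or_path: str
    task: str = "text-generation"
    adapter: Optional[str] = None  # path or hub id of an adapter, e.g. a LoRA checkpoint

    def load(self):
        # Heavy imports live inside load() so constructing ChatLLM stays cheap.
        from transformers import AutoModelForCausalLM

        model = AutoModelForCausalLM.from_pretrained(self.model_name_or_path)
        if self.adapter is not None:
            # peft attaches the adapter weights on top of the frozen base model.
            from peft import PeftModel

            model = PeftModel.from_pretrained(model, self.adapter)
        return model
```

Keeping `adapter` optional means the base model path is unchanged for users who don't need one, and swapping adapters only requires a different path at construction time.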