Issues
support llama 3 models
#34 opened by chauncygu - 1
How to fix unstable loss? I am using WizardLM or Llama-X training code with Vicuna-style chat format to fine-tune the Llama-2-7b-hf model.
#32 opened by apt-team-018 - 1
About llama-2-70B fine-tuning
#31 opened by RickMeow - 0
Have you ever tried PEFT on LLaMA?
#29 opened by ZetangForward - 0
How big is your RAM?
#15 opened by Sorezza - 0
There is an endless loop between `convert_tokens_to_ids(self.unk_token)` and `self._convert_token_to_id_with_added_voc(tokens)`
#23 opened by 2018211801 - 7
About the training strategy
#3 opened by SparkJiao - 1
Good on LLaMA-7B, but bad on LLaMA-13B?
#22 opened by huiyangzhou - 0
About Llama-X and Alpaca repo
#20 opened by haorannlp - 0
Extremely bad performance
#17 opened by 152334H - 0
Demo is dead
#16 opened by fredi-python - 1
Improve LLaMA for visual understanding like GPT-4
#13 opened by feizc - 3
RuntimeError: CUDA out of memory.
#12 opened by OpenSource-fan - 1
Why use offload_param in CPU?
#9 opened by xesdiny - 0
Need optimization for mps
#10 opened by yxKryptonite - 3
Concern on the language
#2 opened by zhhongzhi - 2