OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Python · Apache-2.0
Issues
- Is DPO available? (#741, 6 comments)
- Multiple rounds of training (#731, 3 comments)
- Is the LLaVA model supported? (#730, 5 comments)
- Question (#714, 1 comment)
- How to verify the training is successful (#694, 2 comments)
- [BUG] Output is my input (#692, 1 comment)
- [BUG] Fine-tuning ChatGLM with LoRA (#690, 1 comment)
- Minimum GPU requirement for fine-tuning (#687, 1 comment)
- [BUG] DeepSpeed error report on main branch (#686, 1 comment)
- Problem with the scripts (#685, 1 comment)
- Distributed training parameter settings (#684, 1 comment)
- Training a 70B model (#682, 3 comments)
- Experiments for speculative_decoding (#680, 2 comments)
- baichuan-inc_Baichuan2-7B-Chat can't be trained (#678, 1 comment)
- LoRA + FlashAttention2 speed-up? (#677, 1 comment)
- How to fix the git clone (#676, 4 comments)
- [BUG] Constant loss during fine-tuning (#675, 3 comments)
- A6000 support for FlashAttentionV2 (#665, 2 comments)
- [BUG] Local fine-tuning failed (#664, 1 comment)
- [BUG] Model size change (#662, 1 comment)
- LoRA training - Input: Instruction (#661, 3 comments)
- Issues with local deployment (#660, 1 comment)
- Issue with default temperature settings (#659, 9 comments)
- Support Mistral 7B model (#652, 8 comments)
- No module named 'lmflow.args' (#637, 0 comments)
- Can't train CodeLlama (#631, 1 comment)
- Fine-tuning the llama2-7b-chat model (#629, 0 comments)
- [BUG] Vocab size mismatch (#627, 4 comments)
- About the EOS token (#626, 1 comment)
- Abnormal checkpoint (#623)