Issues
Unable to use GPUs 4,5,6,7 for training
#112 opened by shashwat14 - 1
About "conv_version" and PretrainTemplate .
#101 opened by NyKxo1 - 1
qformer
#110 opened by PangziZhang523 - 1
About the Vision Tower.
#111 opened by Sootung - 2
Finetuning get: ValueError: not enough values to unpack (expected 2, got 0) in get_modality_length_grouped_indices
#109 opened by sc268 - 8
About share recipe. Should change "tune_type_vision_tower" to set to "partially-tune"?
#102 opened by NyKxo1 - 1
Does the Phi template work for Phi3?
#103 opened by NyKxo1 - 0
Add '--raise_error_at_min_scale False' flag in pretrain and finetune scripts to avoid minimum loss scale crash caused by bad batch.
#107 opened by RobotiX101 - 0
Parameter settings when downloading the TinyLLaVA-Phi-2-SigLIP-3.1B model from HF.
#105 opened by eva10084 - 1
replace llm
#95 opened by LilDevsy0117 - 2
template differences?
#77 opened by TuuSiwei - 2
Exception: Current Loss Scale Already at Minimum While Fine-Tuning with 8 A800-80G GPUs
#99 opened by codefanw - 4
distributed computing
#93 opened by 1764758458 - 1
mof_mlp error
#94 opened by Daming-W - 6
require_grad
#81 opened by 1764758458 - 0
Results are not reproducible on Qwen1.5-1.8B
#92 opened by Fantasy1120 - 1
dataset seems incomplete
#91 opened by BroJunn - 3
Missing key "lm_head.weight" in GemmaForCausalLM when loading lora finetuned TinyLLaVA-Gemma-SigLIP-2.4B
#88 opened by Yuki-Kokomi - 2
Create a tinyllava with qwen2-0.5B-instruct
#87 opened by bil-ash - 1
demo link is down.
#85 opened by cooleel - 1
Questions about LLM templates
#83 opened by ShawnAn-WHU - 3
Reproduced results
#82 opened by Fantasy1120 - 2
--fp16 True question
#78 opened by Liavan0122 - 1
share training recipe
#80 opened by Fantasy1120 - 2
Data about evaluation
#75 opened by Fantasy1120 - 3
Implementing RLHF trainer ?
#74 opened by R3xpook - 3
Question about 'fix everything'
#71 opened by Zeqing-Wang - 1
Mistake in eval_textvqa
#72 opened by hedes1992 - 1
Replace vision tower with DINOv2
#68 opened by Daming-W - 2
consuming time for each VLM pretrain/finetune
#67 opened by vision-mini - 8
Fine-tuning TinyLLaVA-Phi-2-SigLIP-3.1B
#62 opened by saadi297 - 3
MM-Vet download link broken
#66 opened by ryan-caesar-ramos - 5
add phi3
#59 opened by Jayantverma2 - 0
question about custom fine-tuning
#58 opened by josephtey - 0
Missing `llavabench.sh`
#57 opened by cooleel - 2
Lora finetune results drop dramatically
#56 opened by YFCYFC