Issues
Want to learn about [MLLM multimodal]? Join the study/discussion group
#19 opened by km1994 - 0
Pretraining with im_start_end token
#18 opened by Ali2500 - 1
Is this still active?
#17 opened by chris-hoertnagl - 2
loss curve of llava-next-llama3
#12 opened by simplelifetime - 1
Inverse Loss Spike issue
#16 opened by LaBaZh - 3
Would you plan to adapt it to qwen2-7B?
#13 opened by Nastu-Ho - 1
Flashattention issue
#15 opened by LaBaZh - 1
srun: unrecognized option '--quotatype=reserved'
#14 opened by LaBaZh - 6
loss curve of SFT on vicuna-7b
#9 opened by Xiaohui9607 - 1
llama3 finetune time
#11 opened by Xiaohui9607 - 1
Vision Tower
#10 opened by homiec - 0
anyres in open-llava-next vs. s2 in llava
#8 opened by LIO-H-ZEN - 0
About the MMMU performance
#7 opened by LIO-H-ZEN - 1
data generation code
#4 opened by trinhvg - 2
vit not saved
#5 opened by RifleZhang - 1
does it support llava-next-video yet?
#3 opened by dragen1860 - 1
[Question] About finetuning projector
#2 opened by JY-CCK