Result is different from the paper for the mosi dataset
utility-aagrawal opened this issue · 7 comments
Hi @Clement25 ,
I trained the model on the mosi dataset and got the following numbers:
They look different from the ones reported in the paper:
Would you know what could have caused these differences? Thanks for your help!
I've encountered a similar issue, although my training was conducted on the MOSEI dataset. In my case, the test-set loss reached its minimum around the second epoch, while the training-set loss stopped decreasing around the tenth epoch. Acc-2 and Acc-7 both come out about two points lower than the results in the paper, and MAE is about two points higher. I was wondering if you managed to resolve this issue and would greatly appreciate your insights. Thank you very much for your assistance.
@sprog1, I have not been able to resolve it and am still waiting to hear from the authors.
Hi, @utility-aagrawal and @sprog1. Could you please try the following value settings and see how they work? From my experiment records they show performance similar to the default settings.
lr_main=2e-4, lr_mmllb=5e-3, alpha=0.1, beta=0.3
lr_main=1e-3, lr_mmllb=5e-3, alpha=0.3, beta=0.1 (default)
lr_main=2e-4, lr_mmllb=1e-3, alpha=0.3, beta=0.3
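A minimal sketch of sweeping over the suggested settings by building the corresponding command lines. The flag names (`--lr_main`, `--lr_mmllb`, `--alpha`, `--beta`) are assumptions; check the argparse definitions in main.py for the exact spelling before running.

```python
# Hypothetical sweep over the author's suggested hyperparameter settings.
# Flag names are assumptions, not confirmed against the repo's main.py.
settings = [
    dict(lr_main="2e-4", lr_mmllb="5e-3", alpha="0.1", beta="0.3"),
    dict(lr_main="1e-3", lr_mmllb="5e-3", alpha="0.3", beta="0.1"),  # default
    dict(lr_main="2e-4", lr_mmllb="1e-3", alpha="0.3", beta="0.3"),
]

cmds = []
for s in settings:
    parts = ["python", "main.py", "--dataset", "mosi"]
    for key, value in s.items():
        parts += [f"--{key}", value]
    cmds.append(" ".join(parts))

for cmd in cmds:
    print(cmd)
```

Each printed line can then be launched directly (e.g. via a shell loop or a job scheduler), keeping one log per setting so the runs stay comparable.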
BTW, which type of GPU do you use? My experiments ran on an RTX A8000; using a different GPU type may produce slight variance.
For the MOSI dataset, running the author-provided code python main.py --dataset mosi --contrast gives the following results.
Removing contrastive learning and running python main.py --dataset mosi instead:
Why does the latter run perform better?
Hi, you need to pass the hyperparameter settings as the paper provides. The default setting differs from the optimal setting for MOSI; for example, the default lr_main is 1e-3, but the best setting for MOSI is 5e-3.
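To illustrate the mismatch above, here is a minimal argparse sketch showing a repo default being overridden on the command line with the MOSI optimum. The flag name --lr_main is an assumption based on this thread, not confirmed against main.py.

```python
import argparse

# Hypothetical subset of main.py's argument parser.
parser = argparse.ArgumentParser()
parser.add_argument("--dataset", default="mosi")
parser.add_argument("--lr_main", type=float, default=1e-3)  # repo default

# Simulate passing the MOSI-optimal value instead of relying on the default.
args = parser.parse_args(["--dataset", "mosi", "--lr_main", "5e-3"])
print(args.lr_main)  # 0.005
```

The point is that running plain `python main.py --dataset mosi` silently uses the 1e-3 default, so the paper's numbers are only reproducible when the per-dataset values are passed explicitly.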
Could you please try the hyperparameters I provided in
#16 (comment)