sony/CLIPSep

MUSIC dataset experiment

Hi, thank you for making this awesome code publicly available!
I followed the preprocessing steps and ran the following command to train the model on the MUSIC dataset.

python train.py -o exp/music -t data/MUSIC/solo/train.csv -v data/MUSIC/solo/val.csv --image_model clipsepnit

However, the validation loss did not decrease. Could you share the training scripts for the MUSIC dataset?

step,train_loss,val_loss,(unlabeled)
10000,0.135361,0.002254,0.000003
20000,0.117345,0.004001,0.000024
30000,0.106620,0.003101,0.000002
40000,0.101036,0.001311,0.000001
50000,0.097306,0.004897,0.000001
60000,0.095004,0.002296,0.000004
70000,0.091312,0.003287,0.000001
80000,0.090387,0.011014,0.000000
90000,0.086588,0.004957,0.000000
100000,0.086544,0.006983,0.000000
110000,0.086237,0.005147,0.000000
120000,0.087733,0.004942,0.000000
130000,0.084574,0.005230,0.000003
140000,0.085653,0.005045,0.000000
150000,0.085149,0.007155,0.000001
160000,0.084418,0.006123,0.000003
170000,0.082859,0.007412,0.000000
180000,0.085397,0.004867,0.000002
190000,0.083519,0.006751,0.000006
200000,0.084105,0.008854,0.000006
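
For reference, here is a minimal sketch of how I plot these curves to check whether the validation loss is decreasing. It assumes the log above is stored as a CSV at exp/music/loss.csv; that path, and the meaning of the fourth (unlabeled) column, are my own assumptions, not something from the repo.

```python
# Minimal sketch (not part of the repo): parse the training log and plot
# the train/val losses. Assumes a CSV at "exp/music/loss.csv" with rows
# of (step, train_loss, val_loss, <unlabeled>); path and column meanings
# are assumptions.
import csv

import matplotlib.pyplot as plt

steps, train_loss, val_loss = [], [], []
with open("exp/music/loss.csv") as f:
    for row in csv.reader(f):
        # Skip empty rows and the header row; keep only numeric rows.
        if not row or row[0] == "step":
            continue
        steps.append(int(float(row[0])))
        train_loss.append(float(row[1]))
        val_loss.append(float(row[2]))

plt.plot(steps, train_loss, label="train_loss")
plt.plot(steps, val_loss, label="val_loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.savefig("music_losses.png")
```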

Thank you in advance.

@naoya-takahashi-sony Just in case, I'm mentioning you here.