AppleHolic/source_separation

Apply curriculum learning, test the results, and resolve the reproduction issue

Closed this issue · 5 comments

All cases are trained with a fixed learning rate and a fixed number of steps. Adopt a decaying learning rate once training reaches the convergence point.
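A minimal sketch of how the decay could be hooked into a PyTorch training loop, assuming a `MultiStepLR` schedule; the 200k milestone and the 0.1 factor are placeholder values, not the repository's actual settings:

```python
import torch

model = torch.nn.Linear(1, 1)  # placeholder for the actual separation model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Decay the learning rate by 10x once training passes the assumed convergence
# point around 200k steps (milestone and gamma are illustrative values).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200_000], gamma=0.1)

for step in range(300_000):
    # ... forward pass, compute loss, loss.backward() ...
    optimizer.step()
    scheduler.step()  # advance the schedule once per training step
```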

Commented on #10

When I found that bug, training of the first best model was already finished, so I decided to double-check.
When I ran the reproduction experiment for the issue above, the loss curve looked much the same as that of the uploaded best checkpoint. At this point, the audioset files should be included to reproduce that result. (Training is still in progress.)
I checked the downloaded audioset files, and they are correct: they contain 18055 files, volume-normalized at 22.05 kHz. I will keep checking the overall process.
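For the file check above, something like the script below can confirm the count and sample rate of the downloaded files (the directory path is hypothetical, and `soundfile` is an assumed dependency):

```python
import glob
import soundfile as sf

# Hypothetical location of the downloaded, volume-normalized audioset files.
AUDIOSET_DIR = '/path/to/audioset'

files = glob.glob(f'{AUDIOSET_DIR}/**/*.wav', recursive=True)
print(f'file count: {len(files)} (expected 18055)')

# Spot-check sample rates; every file should be 22050 Hz after preprocessing.
bad = [f for f in files if sf.info(f).samplerate != 22050]
print(f'files with unexpected sample rate: {len(bad)}')
```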

The test samples are almost the same. Results will be reported after the model finishes training.

  • Simple PESQ Report : #9 (comment) (a PESQ computation sketch is included further below)

  • WSDR losses (validation at 200k steps; a loss sketch follows this list)

    • with audioset : -0.907
    • without audioset : -0.938
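For context on the numbers above: the weighted SDR (wSDR) loss is commonly defined as in the sketch below. This is a generic PyTorch version of that formulation, not necessarily the exact implementation used in this repository. It ranges over [-1, 1], so more negative is better.

```python
import torch

def wsdr_loss(pred: torch.Tensor, clean: torch.Tensor, mixture: torch.Tensor,
              eps: float = 1e-8) -> torch.Tensor:
    """Weighted SDR loss; all tensors are (batch, samples) waveforms."""
    def neg_sdr(est, ref):
        # Negative cosine similarity between estimate and reference signals.
        num = torch.sum(est * ref, dim=-1)
        den = torch.norm(est, dim=-1) * torch.norm(ref, dim=-1) + eps
        return -num / den

    noise = mixture - clean      # true residual noise
    noise_est = mixture - pred   # estimated residual noise
    # Energy-based weight between the speech and noise terms.
    alpha = torch.sum(clean ** 2, dim=-1) / (
        torch.sum(clean ** 2, dim=-1) + torch.sum(noise ** 2, dim=-1) + eps)
    loss = alpha * neg_sdr(pred, clean) + (1 - alpha) * neg_sdr(noise_est, noise)
    return loss.mean()
```

Under this definition, a value around -0.9 means the estimates are strongly correlated with their targets in both the speech and residual-noise terms.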

The model trained without audioset seems to overfit the Voice Bank dataset (as measured on the test set).
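As for the simple PESQ report linked above: the scores could be computed roughly as below with the `pesq` package. PESQ only accepts 8 kHz or 16 kHz input, so the 22.05 kHz samples need resampling first; the file paths here are hypothetical.

```python
import librosa
from pesq import pesq

# Hypothetical paths to a clean reference and the model's enhanced output.
ref_path = 'clean.wav'
deg_path = 'enhanced.wav'

# PESQ expects 8 kHz (narrowband) or 16 kHz (wideband), so resample from 22.05 kHz.
sr = 16000
ref, _ = librosa.load(ref_path, sr=sr, mono=True)
deg, _ = librosa.load(deg_path, sr=sr, mono=True)

score = pesq(sr, ref, deg, 'wb')  # wideband PESQ (P.862.2)
print(f'PESQ (wb): {score:.3f}')
```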

  • PR Plan
    1. First, I will upload the 200k-step result (with audioset).
    2. (WIP) Training is continuing beyond 200k steps with a decayed learning rate. The model will be uploaded after it finishes.

#15 pushed an intermediate result

Remaining work

  • check the audioset preprocessing code
  • continue training the model with a decayed learning rate

Further improvements will be handled in another issue. Closing this one.