Used 86 sentences to train this method, but the result is bad.
linan2 opened this issue · 3 comments
Hi,
I used 86 sentences to train the model, but the result is very bad.
Do I need more data to train my model?
Also, while training the model the virtual memory occupancy is very large. How can I tune the parameters to reduce resource consumption?
Thank you very much!
I don't know how many minutes of speech you have, but I think transferring from a pretrained SEGAN can help if you have a low amount of your own data. Here you have the observations: https://arxiv.org/abs/1712.06340
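A minimal sketch of what that transfer could look like in TensorFlow 1.x, assuming you already have a pretrained SEGAN checkpoint on disk; the checkpoint path here is hypothetical and not the repo's actual filename:

```python
import tensorflow as tf

# Hypothetical path; point this at the pretrained SEGAN checkpoint you downloaded.
PRETRAINED_CKPT = 'pretrained_segan/segan.ckpt'

# Build your model graph first (generator/discriminator), then restore the
# pretrained variables before continuing training on your own ~30 min of data.
saver = tf.train.Saver()  # by default restores all variables in the current graph

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.restore(sess, PRETRAINED_CKPT)
    # ...continue training from here (fine-tuning), ideally with a lower learning rate.
```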
In terms of tuning the memory consumption, it is the TensorFlow Session configuration, I guess. It is a TF-internal thing, but you can specify dynamic memory growth rather than full occupancy.
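For example, something like this (a sketch assuming TF 1.x; the memory fraction value is only illustrative):

```python
import tensorflow as tf

config = tf.ConfigProto()
# Allocate GPU memory on demand instead of grabbing it all at startup.
config.gpu_options.allow_growth = True
# Alternatively, cap the fraction of GPU memory TF may use (value is illustrative).
# config.gpu_options.per_process_gpu_memory_fraction = 0.5

sess = tf.Session(config=config)
```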
Thanks for your reply, sir. I used a reverb dataset of about 86 sentences, maybe 30 minutes or so. When I train the model the weight is 1350, while your model's is maybe about 40000. The result is worse than the original unprocessed audio, i.e. worse than before applying SEGAN.