How to bring down the number of epochs in the example Python notebook and generate samples based on log files?
soft-nougat opened this issue · 6 comments
Hey Team!
Just looking to test this package out, so I lowered the number of epochs in the example\config.json file to 1. Still, the model doesn't take the 1 epoch into account and goes on to run a second epoch. Am I looking at the wrong file? Should I change the number of epochs somewhere else?
On a related note, can you please advise how I can sample data using the log files created in the output folder? I.e., can I just run the files created there and generate data based on them?
Thanks so much!
Hi @soft-nougat, and thanks for your interest!!
It seems like you might be using the old version of TGAN?
From the latest version on, TGAN can be used directly from Python.
Please check the quickstart from the documentation, as it will probably clarify most of your doubts: https://dai-lab.github.io/TGAN/readme.html#quickstart
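For context, the Python API flow from the quickstart looks roughly like this (a minimal sketch; 'census.csv' and the continuous column indices are placeholders for your own data):

```python
import pandas as pd
from tgan.model import TGANModel

# Placeholder dataset; the indices of the continuous columns
# depend entirely on your own data.
data = pd.read_csv('census.csv')
continuous_columns = [0, 5]

# Build and train the model, then draw synthetic rows from it.
tgan = TGANModel(continuous_columns)
tgan.fit(data)
samples = tgan.sample(1000)

# To sample again later, save the fitted model and reload it;
# new samples are drawn via .sample() on the reloaded model.
model_path = 'models/mymodel.pkl'
tgan.save(model_path)
new_tgan = TGANModel.load(model_path)
new_samples = new_tgan.sample(1000)
```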
Hey Manuel!
Thanks for the reply. I should be using the latest version as I installed the tgan package.
Also, I am following the quickstart, but there are no instructions on how to lower the epochs in the model fitting step. As mentioned, I changed the number of epochs in the config.json file, but it didn't work.
Thanks!
Tia
Hi @soft-nougat,
Also, I am following the quickstart, but there are no instructions on how to lower the epochs in the model fitting step.
You have to set them when creating the TGANModel instance; you can find a full reference of the constructor arguments here
As mentioned, I changed the number of epochs in the config.json file, but it didn't work.
That's what confused me, as the config.json is used only with the CLI to run the random hyperparameter search. Since you are using the Python API, you can ignore it and set the number of epochs as specified in the link I shared above
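Concretely, something like the following should work (a sketch reusing data and continuous_columns from the quickstart above; max_epoch sets the number of epochs and steps_per_epoch the number of batches per epoch):

```python
from tgan.model import TGANModel

# Training length is configured on the constructor, not on .fit():
tgan = TGANModel(
    continuous_columns,
    max_epoch=1,          # a single epoch, for a quick test run
    steps_per_epoch=100,  # optionally shorten each epoch as well
)
tgan.fit(data)
```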
You have to set them when creating the TGANModel instance; you can find a full reference of the constructor arguments here
I think it might be better to move the epochs/batch_size and other training-related parameters to the .fit() call. This would align it better with the .fit() of other APIs like Keras and fastai, and it feels a bit weird that, when you're loading a model from disk, you still need to specify the epochs and other training-related parameters.
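For illustration, the difference would be something like this (the second form is purely hypothetical; it is the proposed interface, not the current TGAN API):

```python
# Current API: training parameters are fixed when the model is built.
tgan = TGANModel(continuous_columns, max_epoch=5, batch_size=200)
tgan.fit(data)

# Proposed (hypothetical): pass them to .fit(), Keras/fastai style,
# so a model reloaded from disk carries no training-time settings.
tgan = TGANModel(continuous_columns)
tgan.fit(data, max_epoch=5, batch_size=200)
```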
I think it might be better to move the epochs/batch_size and other training-related parameters to the .fit() call. This would align it better with the .fit() of other APIs like Keras and fastai, and it feels a bit weird that, when you're loading a model from disk, you still need to specify the epochs and other training-related parameters.
This makes a lot of sense; could you please open a new issue so we can continue the discussion there?
@soft-nougat, did you manage to solve your issue? Please give me a thumbs up so I can close this issue. Thanks
Thanks Manuel and Bauke!
Very helpful, I will close the issue. :)
Tia