run time and loss
noranali opened this issue · 2 comments
noranali commented
thank you for your code. why is the loss different every time I restart the runtime and run the training?
can you help me please?
jiangsutx commented
The training process includes randomness, e.g. random data-sequence order, random weight initialization, etc.
If you want a deterministic run, you need to fix the random seed for Python's random module, NumPy's random module, and TensorFlow.
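The seed fixing described above can be sketched as follows (the function name `set_seeds` is illustrative, not from the repo; the TensorFlow call is guarded so the sketch also runs where TF is absent, and note that TF2 uses `tf.random.set_seed` instead of the TF1-era `tf.set_random_seed`):

```python
import random
import numpy as np

def set_seeds(seed=42):
    # Fix Python's and NumPy's RNGs so data shuffling and any
    # NumPy-based initialization are repeatable across runs.
    random.seed(seed)
    np.random.seed(seed)
    # Fix TensorFlow's graph-level seed as well (TF1 API shown,
    # matching the era of this code base; use tf.random.set_seed
    # under TF2). Guarded so the sketch runs without TF installed.
    try:
        import tensorflow as tf
        tf.set_random_seed(seed)
    except (ImportError, AttributeError):
        pass
```

Call `set_seeds()` once at the top of the training script, before any data loading or graph construction, so every source of randomness is seeded before it is first used.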
noranali commented
thank you for your reply.
I have read your paper and found that the model converges at 4000 epochs, but Colab disconnects at about 180 epochs. I want to use the final weights as the initialization for the next run. can you help me please?
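One common way to survive Colab disconnects is to checkpoint periodically and resume from the latest checkpoint on restart (in TF1 this is typically done with `tf.train.Saver`, `saver.save`, and `saver.restore`). A minimal framework-agnostic sketch of the resume pattern, using NumPy files for brevity (the file name and helper names here are illustrative, not from the repo):

```python
import os
import tempfile
import numpy as np

def save_checkpoint(path, weights, epoch):
    # Persist all weight arrays plus the epoch counter so a
    # restarted runtime knows where training left off.
    np.savez(path, epoch=epoch, **weights)

def load_checkpoint(path):
    # Return (weights, start_epoch); (None, 0) means no checkpoint
    # yet, so training starts from scratch.
    if not os.path.exists(path):
        return None, 0
    data = np.load(path)
    weights = {k: data[k] for k in data.files if k != 'epoch'}
    return weights, int(data['epoch'])

# Usage: save before the runtime dies, reload after reconnecting.
ckpt = os.path.join(tempfile.gettempdir(), 'ckpt.npz')  # hypothetical path
save_checkpoint(ckpt, {'conv1': np.ones((3, 3))}, epoch=180)
restored, start_epoch = load_checkpoint(ckpt)
# the training loop then resumes: for epoch in range(start_epoch, 4000): ...
```

With Colab, the checkpoint path should point at mounted Google Drive rather than the local disk, since local files are lost when the runtime is recycled.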