MLP Model Training has some stochasticity - double check seeds etc.
j6mes opened this issue · 2 comments
j6mes commented
Training itself is repeatable. The stochasticity might be in the preprocessing step, i.e. generating the vocabulary and TF-IDF vectors with NLTK.
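A common source of run-to-run variation in vocabulary/TF-IDF pipelines (not confirmed to be the cause here) is building the token-to-index mapping by iterating over a Python `set`, whose order changes across runs due to hash randomization. A minimal sketch of the fix, with made-up example documents:

```python
# Hypothetical illustration, not the repo's actual preprocessing code.
docs = ["the cat sat", "the dog ran", "a cat ran"]

# Collecting tokens in a set is fine, but iterating over it directly
# gives a different order on each interpreter run (PYTHONHASHSEED).
tokens = {tok for doc in docs for tok in doc.split()}

# Sorting before assigning indices makes the vocab deterministic,
# so the TF-IDF feature columns line up across runs.
vocab = {tok: i for i, tok in enumerate(sorted(tokens))}
print(vocab)
```

With a stable `vocab`, the TF-IDF matrices produced on separate runs are identical column-for-column.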
j6mes commented
There was actually no issue with seeds or randomness here, as the eval scripts ran fine.
The problem was that the best weights were not reloaded at the end of training before the scores were printed to stdout. If you run the eval script after training instead, the results are in line with the values reported in the paper.
I just edited the training script to load the best weights after training. @christos-c
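The fix described above amounts to checkpointing the best-scoring weights during training and restoring them before the final evaluation, rather than evaluating whatever the last epoch produced. A minimal sketch, with weights modeled as a plain dict and a dummy training loop standing in for the real one (in the actual script this would be a framework checkpoint, e.g. a PyTorch `state_dict`):

```python
# Hypothetical sketch of the fix, not the repo's actual training script.
import copy
import random

random.seed(42)  # fixed seed so the toy run is repeatable

def train_epoch(weights):
    """Stand-in for one epoch: perturb weights, return a validation score."""
    new_weights = {k: v + random.uniform(-1, 1) for k, v in weights.items()}
    return new_weights, random.random()

weights = {"w": 0.0, "b": 0.0}
best_score = float("-inf")
best_weights = copy.deepcopy(weights)

for epoch in range(10):
    weights, score = train_epoch(weights)
    if score > best_score:
        # Snapshot the best checkpoint as training progresses.
        best_score = score
        best_weights = copy.deepcopy(weights)

# The original bug: scores were printed using `weights` from the *last*
# epoch. The fix: restore the best checkpoint before evaluating/printing.
weights = best_weights
```

After this restore, the in-process scores printed at the end of training match what the standalone eval script reports.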