Running train multiple times (same inputs) results in different models
Opened this issue · 1 comment
mathisloevenich commented
Hey, I am using your tool for my Bachelor thesis on matching social network profiles.
It works fine, but as you might know, reproducibility
is a major factor in writing a thesis.
However, I noticed that when I run the setup multiple times with the same inputs,
the training behaves differently and produces different models (sometimes astonishingly good ones, sometimes comparably bad ones).
I did not find any documentation about this. Is randomness a key part of the process, or did I get something wrong?
If you have any documentation with deeper insights into the process, please point me to it.
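(Editor's note: for context, most ML training pipelines draw on pseudo-random number generators for weight initialization, data shuffling, and similar steps, so two runs on identical inputs can diverge unless every generator involved is seeded. The sketch below illustrates the general idea in Python; `simulate_training_run` is a hypothetical stand-in for a training run, and a real setup would also seed whichever libraries the tool actually uses, e.g. numpy or torch.)

```python
import random

def set_seed(seed: int) -> None:
    # Seed Python's built-in RNG. A real pipeline would also seed
    # numpy / torch / tensorflow here, if the tool depends on them.
    random.seed(seed)

def simulate_training_run(seed: int, n_params: int = 5) -> list:
    # Hypothetical stand-in for training: the "model" is just a few
    # randomly initialized parameters, so its outcome depends entirely
    # on the RNG state at the start of the run.
    set_seed(seed)
    return [random.gauss(0.0, 1.0) for _ in range(n_params)]

run_a = simulate_training_run(seed=42)
run_b = simulate_training_run(seed=42)
print(run_a == run_b)  # True: identical seeds give identical "models"
```

Without the `set_seed` call, each run would start from a different RNG state and yield a different model, which matches the behavior described above.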
mathisloevenich commented
I want to mention that I am not retraining the existing model.