When I train the model, the following is printed out.
JeonHyeongJunKW opened this issue · 3 comments
I ran the command below on Google Colab.
python main.py --mode=train --arch=vgg16 --pooling=netvlad --num_clusters=64
After a while, the following output appeared.
The first is:
/usr/local/lib/python3.6/dist-packages/sklearn/neighbors/_base.py:621: UserWarning: Loky-backed parallel loops cannot be called in a multiprocessing, setting n_jobs=1 n_jobs = effective_n_jobs(self.n_jobs)
The second is:
[ True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True True]
The second output is not always exactly like this, but it has a similar form.
I wonder whether this output during training is a symptom of something that will produce wrong results.
For the first warning see #49.
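For context, here is a minimal sketch of the situation that usually produces that warning; it is an assumption about the general scikit-learn/DataLoader interaction, not the repo's exact code, and the data shapes and radius below are made up:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

db = np.random.rand(1000, 2)          # stand-in for database coordinates
knn = NearestNeighbors(n_jobs=-1)     # parallel neighbour search requested
knn.fit(db)

# When a call like this happens inside a torch DataLoader worker process
# (num_workers > 0), joblib's loky backend cannot spawn nested workers,
# so scikit-learn falls back to a single job and emits:
#   "Loky-backed parallel loops cannot be called in a multiprocessing, setting n_jobs=1"
dist, idx = knn.radius_neighbors(db[:5], radius=0.1)
```

The warning only means the neighbour search runs single-threaded inside the worker; it does not change the results.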
I've never seen the second output; I've also never run this code inside a notebook, so perhaps that causes it to print something. Given that there are 100 bools there, I expect it's related to the negative mining (https://github.com/Nanne/pytorch-NetVlad/blob/master/pittsburgh.py#L233-L239), though I have no idea why it would output any of that.
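For readers wondering what those booleans are, here is a minimal sketch of that negative-mining check, written from the description above. The variable names (qFeat, posFeat, negFeat, margin, nNeg) and the exact margin test are assumptions rather than the repo's verbatim code; violatingNeg is the name mentioned in the follow-up comment below.

```python
import numpy as np

rng = np.random.default_rng(0)
qFeat = rng.standard_normal(256)             # query descriptor
posFeat = rng.standard_normal(256)           # best positive descriptor
negFeat = rng.standard_normal((100, 256))    # 100 candidate negatives

margin = 0.1
nNeg = 10

dPos = np.sum((qFeat - posFeat) ** 2)            # squared distance to the positive
dNeg = np.sum((negFeat - qFeat) ** 2, axis=1)    # squared distances to the negatives

# A negative "violates" the margin when it is not sufficiently farther from
# the query than the positive; only violating negatives are useful for the triplet loss.
violatingNeg = dNeg < dPos + margin
print(violatingNeg)   # a 100-element boolean array, like the second output above

# Keep the hardest violating negatives (indices within the violating subset).
hardNeg = np.argsort(dNeg[violatingNeg])[:nNeg]
```

A stray print of such a mask during training is harmless; it only clutters the log.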
I forgot that I had put print(violatingNeg)
in the middle of the code while inspecting it. Thank you for your kind reply. :) Also, how much time does one epoch take when using a GPU?