Cannot get the same test result
mingminzhen opened this issue · 4 comments
I use the pretrained model for inference: python eval.py --ms-mirror True --inst-prune True --eval-sal True
and the MATLAB code to evaluate. The results are as follows:
| Metric | cosnet | adnet | mmadnet |
|---|---|---|---|
| J mean | 0.805 | 0.817 | 0.803 |
| J recall | 0.931 | 0.909 | 0.900 |
| J decay | 0.044 | 0.022 | 0.021 |
| F mean | 0.795 | 0.805 | 0.793 |
| F recall | 0.895 | 0.851 | 0.847 |
| F decay | 0.050 | 0.006 | 0.004 |
| T (GT 0.088) | 0.184 | 0.225 | 0.228 |
adnet uses the files you provided; mmadnet is my own run.
I am not sure what is wrong. Are some of the parameters wrong?
Hi @mingminzhen,
Thank you. The issue is trivial and purely organizational; it requires no code change to resolve.
To make doubly sure this is the case, could you verify the following for me? (A quick check is sketched after the list.)
(1) In your "detection" folder, you see a subfolder that is also named "detection".
(2) A folder named "inst_prune" does not exist (this is the folder that should house the generated pruning masks after running detection_filter.py).
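A minimal shell sketch of that check, run from the repository root (assuming the layout described above):

```sh
# (1) The nesting bug is present if this inner folder exists:
[ -d detection/detection ] && echo "found nested detection/detection"
# (2) inst_prune should have been created by detection_filter.py;
#     its absence means the filter found no detections to prune:
[ -d inst_prune ] || echo "inst_prune is missing"
```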
These really shouldn't have happened; they are due to my carelessness in writing "unzip detection.zip -d detection" in the README. Since the zip file already contains a top-level "detection" folder, the -d flag wraps it in a second "detection" folder. The correct command is simply "unzip detection.zip". I've updated the README now.
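For illustration, the layouts the two commands produce (assuming the zip's top-level folder is named detection):

```sh
unzip detection.zip -d detection   # wrong: yields detection/detection/*.pkl
unzip detection.zip                # correct: yields detection/*.pkl
```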
So, simply, you can either (1) move the "detection" subfolder out of its parent folder (which is also named "detection") and delete the now-empty parent, or (2) remove your current "detection" folder and unzip again with "unzip detection.zip", which extracts the zip's contents without creating a wrapper folder. Both options are sketched below.
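A minimal shell sketch of the two options (assuming you are in the repository root; the temporary name detection_wrapper is mine):

```sh
# Option 1: hoist the inner folder out, then drop the wrapper
mv detection detection_wrapper
mv detection_wrapper/detection .
rmdir detection_wrapper

# Option 2: start over without the -d flag
rm -rf detection
unzip detection.zip
```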
Then everything else is the same as before. Re-run "python detection_filter.py": previously it would have printed a bunch of "no detection on: XXX" messages to stdout, because the program expects the pickle files at detection/*.pkl rather than detection/detection/*.pkl. Then re-run "python eval.py --ms-mirror True --inst-prune True --eval-sal True" and verify the results; on my end this gives 0.821 J mean, slightly higher than the 0.817 reported in the paper.
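A quick sanity check before re-running, in the same hedged spirit (I don't know the exact pickle filenames in the zip, only that they should sit directly under detection/):

```sh
# Succeeds only if at least one .pkl sits directly under detection/:
ls detection/*.pkl >/dev/null 2>&1 && echo "layout OK" || echo "no .pkl files at detection/*.pkl"
```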
Let me know. Thanks again.
Yes, I can get a similar result by following the new method.
@yz93 Hi, I want to ask a question about training. In the training step, do you use any other training data, such as saliency detection data (DUTS), to train DeepLabv3?
@mingminzhen No problem. No, we didn't; we only used the DAVIS16 training set.