Some question about paper
jyang68sh opened this issue · 4 comments
Hi,
So, after carefully reading the paper, I am not sure whether I understood it correctly.
The paper proposes a loss that helps detect the abnormal (outlier) class.
steps:
1. Formulate D_in and D_out; D_in should not overlap with D_out.
2. Train the model with D_in.
3. Retrain the model with D_out.
Question: what do you mean by "fine-tune only the final classification block using the loss in (2)"?
Thanks!
Hi,
In step 2), training the model is implemented by other works, which is why we must load the pre-trained checkpoint.
All we do is step 3), but we are not re-training the entire model; we only fine-tune the last classification block, as shown here. The reason we use the phrase "fine-tuning" is that we also partially load the weights of that classification block here.
On the other hand, we fine-tune the final block with both D_in and D_out in step 3), as the input data contains both driving scenes and the synthetic OOD regions. You can find it in Fig. 2, page 6.
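The "fine-tune only the final block" idea above can be sketched as follows. This is a minimal illustration with hypothetical parameter names, not the actual PEBAL code (the real repo partially loads a PyTorch checkpoint):

```python
# Hedged sketch: freeze everything except the final classification block.
# All parameter names below are hypothetical, for illustration only.

def trainable_flags(param_names, trainable_prefix):
    """Return a name -> bool map: True only for the final block's params."""
    return {name: name.startswith(trainable_prefix) for name in param_names}

# Hypothetical parameter names of a pre-trained segmentation checkpoint.
checkpoint_params = [
    "backbone.layer1.conv.weight",
    "backbone.layer4.conv.weight",
    "classifier.final_block.conv.weight",  # fine-tuned in step 3)
    "classifier.final_block.conv.bias",    # fine-tuned in step 3)
]

flags = trainable_flags(checkpoint_params, "classifier.final_block")
# Only the two final-block parameters would receive gradient updates.
```

In a PyTorch implementation, one would then set `param.requires_grad = flags[name]` for each named parameter after loading the checkpoint, so the optimiser only updates the final classification block.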
Regards,
Yuyuan
closing the issue, but feel free to reopen.
"The minimisation of the loss in (3) will abstain from classifying outlier pixels into one of the inlier classes, where a pixel is estimated to be an outlier with a_ω"
Hi @yyliu01
From what I understood, we want to minimise the loss so that it does not classify an outlier pixel into one of the inlier classes. But according to the paper, the minimum of the PAL loss is different.
Could you please explain? Thanks!
@jyang68sh Sorry, I didn't see your post here; please reopen the issue if you feel the question is not well answered.
Your understanding is totally correct: the confidence of the extra (abstention) channel is added back to the inlier classes after being divided by the "reward" value. In this part, PAL guides this "reward" via an energy-based (i.e., EB) regularisation.
In case the functions in the paper still confuse you, please feel free to send me an email or re-open the issue.
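To make "add back after dividing by the reward" concrete, here is a hedged sketch of a gambler-style abstention loss for a single pixel. It is simplified: the pixel-wise reward a_ω is just passed in as a number, whereas in the paper it is produced by the energy-based regularisation, which is not reproduced here:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def abstention_loss(logits, label, reward):
    """Gambler-style abstention loss for one pixel (simplified sketch).

    logits : shape (K + 1,); the last entry is the extra abstention channel.
    label  : ground-truth inlier class index in [0, K).
    reward : pixel-wise reward a_w; the abstention mass is divided by it
             before being added back to the inlier probability.
    """
    p = softmax(np.asarray(logits, dtype=float))
    return -np.log(p[label] + p[-1] / reward)

# A smaller reward credits more abstention mass back, lowering the loss,
# so abstaining is encouraged where the (energy-based) reward is small.
logits = [1.0, 0.5, 3.0]  # strong abstention channel (last entry)
loss_small_reward = abstention_loss(logits, 0, reward=2.0)
loss_large_reward = abstention_loss(logits, 0, reward=10.0)
```

Here `loss_small_reward < loss_large_reward`: dividing the abstention probability by a small reward returns more of it to the inlier class before the log, which is the "add back" behaviour described above.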
Regards,
Yuyuan