Does the size consistency loss only affect the size residual?
lilanxiao opened this issue · 2 comments
Hi, thank you very much for your nice work.
I have a question about the size consistency loss. The function compute_size_consistency_loss
uses the following code to get the size of bounding boxes:
size_class = torch.argmax(end_points['size_scores'], -1)  # hard class pick, non-differentiable
...
size_base = torch.index_select(mean_size_arr, 0, size_class.view(-1))  # look up the size template of the picked class
...
size = size_base + size_residual  # predicted box size
And the consistency loss is calculated using MSE. Since torch.argmax()
is non-differentiable, this loss seems to affect only the prediction of the size residual and has no direct influence on the prediction of the size class. In my view, the size consistency loss should include an additional KL-divergence term that minimizes the difference between the size scores produced by the teacher and the student (like the class consistency loss). However, your code does not do this and still achieves great performance.
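To illustrate the gradient flow, here is a minimal toy check (my own sketch with made-up shapes, not code from the repository):
import torch

# 4 proposals, 10 size classes, one (l, w, h) template per class
size_scores = torch.randn(4, 10, requires_grad=True)
size_residual = torch.randn(4, 3, requires_grad=True)
mean_size_arr = torch.rand(10, 3)

size_class = torch.argmax(size_scores, -1)                    # hard selection, cuts the graph
size_base = torch.index_select(mean_size_arr, 0, size_class)  # template lookup via integer indices
size = size_base + size_residual

loss = (size ** 2).mean()   # stand-in for the MSE consistency term
loss.backward()
print(size_scores.grad)     # None: no gradient reaches the size scores
print(size_residual.grad)   # populated: only the residual is updated by this loss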
Is it intended behavior? Are there any intuitions behind it?
Hi, thanks for your interest in our work.
In our implementation, each class has only one size template. In other words, the 'size_class_label' and 'sem_cls_label' (i.e., the ground truths of the size class and the semantic class) of an object are the same, so the predictions of the size class and the semantic class should be similar. Hence, 'size_residual' has more influence on the size consistency loss computation.
I think it would be helpful to add an additional term that minimizes the difference between the size scores of the two networks if each class has multiple size templates. If you are interested in trying that out, please let me know the results. :)
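For illustration, such a term could look roughly like this (only a sketch; the function name and the assumption that both networks' 'size_scores' are raw logits over matched proposals are hypothetical):
import torch.nn.functional as F

def compute_size_class_consistency_loss(student_scores, teacher_scores):
    # student_scores / teacher_scores: (B, num_proposal, num_size_cluster) raw logits
    log_p_student = F.log_softmax(student_scores, dim=-1)
    p_teacher = F.softmax(teacher_scores, dim=-1)
    # kl_div takes log-probabilities as input and probabilities as target;
    # 'batchmean' averages the summed KL over the leading batch dimension
    return F.kl_div(log_p_student, p_teacher, reduction='batchmean')

It could then be added to the existing MSE-based size consistency loss with a small weighting factor.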
Yeah, that makes sense. Thank you for your explanation!
I'm going to close this issue. If I get interesting results, I'll be glad to share them here.