Implementation about kernel activation
lifuguan opened this issue · 1 comment
lifuguan commented
Hello,
Sorry to disturb you. I'm trying to visualize the kernels (called `object_feats` in your code), which are illustrated in your paper.
Here is my code, which saves the kernels and accumulates them into kernels.npy during the inference phase.
"""kernel_iter_update.py line:296"""
results.append(single_result)
from debugger import save_test_info
save_test_info(img_metas, scores_per_img, masks_per_img, object_feats)
return results
```python
import os

import numpy as np
import torch


def save_test_info(img_metas: list,
                   cls_score: torch.Tensor,
                   scaled_mask_preds: torch.Tensor,
                   obj_feats: torch.Tensor):
    ...
    # kernels: accumulate the per-image kernels into a running sum on disk
    if obj_feats is not None:
        kernels = obj_feats.to('cpu').detach().numpy()
        if os.path.exists("work_dirs/tmp/kernels.npy"):
            kernels = kernels + np.load("work_dirs/tmp/kernels.npy")
        np.save("work_dirs/tmp/kernels.npy", kernels)
```
"""after inference phrase"""
fig,a = plt.subplots(10,10)
kernels_2dim = kernels.reshape((100,16,16))
for i in range(100):
# a[int(i / 10)][i % 10].set_title(i)
a[int(i / 10)][i % 10].set_xticks([])
a[int(i / 10)][i % 10].set_yticks([])
a[int(i / 10)][i % 10].imshow(kernels_2dim[i], cmap = plt.cm.hot_r)
plt.savefig('work_dirs/tmp/class_80_ins_2/kernel_2dim.png', bbox_inches='tight')
plt.show()
However, the result is completely different from the figures in your paper.
I would appreciate it if anyone could show me how to visualize the kernels correctly.
ZwwWayne commented
You should not save kernels, but the masks. The paper says "... the average of mask activations of the 100 instance kernels over the 5000 images in the val split. " Thus, the mask predictions after sigmoid are saved and averaged, rather than the kernels.
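Based on this explanation, a minimal sketch of the intended procedure: apply sigmoid to each image's mask predictions, sum them across images, and divide by the image count at the end. The tensor shape `(100, H, W)` and the helper name `accumulate_mask_activations` are assumptions for illustration, not the repository's actual code.

```python
import torch


def accumulate_mask_activations(running_sum, scaled_mask_preds):
    """Add one image's sigmoid mask activations to a running sum.

    scaled_mask_preds: raw mask logits of shape (100, H, W) for one image
    (assumed shape; one map per instance kernel).
    """
    acts = scaled_mask_preds.sigmoid().cpu().numpy()  # (100, H, W), values in [0, 1]
    return acts if running_sum is None else running_sum + acts


# Toy usage: average over a few random "images" instead of the 5000 val images.
running, n_images = None, 3
for _ in range(n_images):
    fake_logits = torch.randn(100, 16, 16)  # stand-in for real mask logits
    running = accumulate_mask_activations(running, fake_logits)
avg_masks = running / n_images  # (100, H, W): per-kernel average mask activation
print(avg_masks.shape)
```

The resulting `avg_masks` can then be plotted with the same 10x10 `imshow` grid as above, one panel per kernel.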