Question about positional encoding in tpvformer04
MrRexy-Ling opened this issue · 0 comments
Hi! The positional encoding mask you create is only for the HW plane — may I ask the reason for this?
In `TPVFormer-main/tpvformer04/tpv_head.py`:

```python
self.positional_encoding = build_positional_encoding(positional_encoding)
tpv_mask_hw = torch.zeros(1, tpv_h, tpv_w)
self.register_buffer('tpv_mask_hw', tpv_mask_hw)

tpv_mask_hw = self.tpv_mask_hw.expand(bs, -1, -1)
tpv_pos_hw = self.positional_encoding(tpv_mask_hw).to(dtype)
tpv_pos_hw = tpv_pos_hw.flatten(2).transpose(1, 2)
```
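For context, here is a minimal self-contained sketch of what that pipeline does. The mask is all zeros and only its shape `(bs, h, w)` is consumed by the encoder, which produces a learned 2D embedding per grid cell that is then flattened into per-query positional features. The `LearnedPositionalEncoding2D` class below is my own illustration (modeled on learned row/column embeddings, not the actual class built by `build_positional_encoding`), so names and details are assumptions:

```python
import torch
import torch.nn as nn

class LearnedPositionalEncoding2D(nn.Module):
    """Hypothetical stand-in for the encoder returned by
    build_positional_encoding: learned row/column embeddings
    concatenated into a (bs, 2*num_feats, h, w) map."""
    def __init__(self, num_feats, row_num_embed, col_num_embed):
        super().__init__()
        self.row_embed = nn.Embedding(row_num_embed, num_feats)
        self.col_embed = nn.Embedding(col_num_embed, num_feats)

    def forward(self, mask):
        # Only the shape of the (all-zero) mask is used.
        bs, h, w = mask.shape
        x = torch.arange(w, device=mask.device)
        y = torch.arange(h, device=mask.device)
        x_embed = self.col_embed(x)  # (w, num_feats)
        y_embed = self.row_embed(y)  # (h, num_feats)
        pos = torch.cat([
            x_embed.unsqueeze(0).repeat(h, 1, 1),  # (h, w, num_feats)
            y_embed.unsqueeze(1).repeat(1, w, 1),  # (h, w, num_feats)
        ], dim=-1).permute(2, 0, 1)                # (2*num_feats, h, w)
        return pos.unsqueeze(0).repeat(bs, 1, 1, 1)

# Usage mirroring the snippet above (sizes are illustrative)
bs, tpv_h, tpv_w = 2, 100, 100
pe = LearnedPositionalEncoding2D(num_feats=128,
                                 row_num_embed=tpv_h,
                                 col_num_embed=tpv_w)
tpv_mask_hw = torch.zeros(bs, tpv_h, tpv_w)
tpv_pos_hw = pe(tpv_mask_hw)                        # (bs, 256, tpv_h, tpv_w)
tpv_pos_hw = tpv_pos_hw.flatten(2).transpose(1, 2)  # (bs, tpv_h*tpv_w, 256)
```

So the resulting `tpv_pos_hw` assigns one positional vector to each of the `tpv_h * tpv_w` HW-plane queries, which is why the mask only needs the HW shape.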