NVlabs/SegFormer

KV compression in attention may cause small targets and details to be lost?

Opened this issue · 0 comments

In the attention module, K and V are spatially compressed to reduce the amount of computation. Won't small targets and fine details be lost because of this compression?
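
For reference, here is a minimal sketch of the mechanism I am asking about: spatial-reduction attention, where a strided convolution shrinks the token grid before computing K and V, following the description in the SegFormer paper. The module and parameter names (`EfficientSelfAttention`, `sr_ratio`) are my own shorthand and may not match the repo's actual code.

```python
import torch
import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    """Self-attention with spatial reduction of K/V (SegFormer-style sketch).

    When sr_ratio > 1, key/value tokens are downsampled by a strided conv,
    shrinking attention cost from O(N^2) to roughly O(N^2 / sr_ratio^2).
    """
    def __init__(self, dim, num_heads=1, sr_ratio=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)
        self.sr_ratio = sr_ratio
        if sr_ratio > 1:
            # Strided conv compresses the (H, W) token grid before K/V.
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):
        B, N, C = x.shape  # N == H * W
        q = self.q(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        if self.sr_ratio > 1:
            # Compress the token grid: N tokens -> N / sr_ratio^2 tokens.
            x_ = x.transpose(1, 2).reshape(B, C, H, W)
            x_ = self.sr(x_).reshape(B, C, -1).transpose(1, 2)
            x_ = self.norm(x_)
        else:
            x_ = x
        kv = self.kv(x_).reshape(B, -1, 2, self.num_heads, self.head_dim)
        k, v = kv.permute(2, 0, 3, 1, 4).unbind(0)

        # Queries keep full resolution; only K/V are compressed, so every
        # output position still gets its own attention result.
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

Note that the queries stay at full resolution and the reduction is a learned convolution rather than plain pooling, so each compressed K/V token is a learned summary of an sr_ratio x sr_ratio patch. My question is whether such a summary can still preserve objects smaller than that patch. (If I read the paper correctly, the reduction ratio also decreases at deeper, lower-resolution stages, with no reduction in the last stage.)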