Why bias in Q, K, V projection of SpatialSelfAttention?
SimeonZhang opened this issue · 0 comments
SimeonZhang commented
stable-diffusion/ldm/modules/attention.py
Line 99 in 21f890f
As I understand it, the other attention implementations in this module define their Q, K, V projections with bias=False, while SpatialSelfAttention does not. Why is it different?
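For reference, here is a condensed sketch of the two projection styles as I read them in attention.py (paraphrased, not the exact classes; constructor arguments trimmed for brevity). The contrast I'm asking about: CrossAttention uses nn.Linear with bias=False, while SpatialSelfAttention uses 1x1 nn.Conv2d layers, which keep PyTorch's default bias=True.

```python
import torch.nn as nn

# Paraphrase of the CrossAttention-style projections in this module:
# linear Q/K/V maps with bias explicitly disabled.
class CrossAttentionProjections(nn.Module):
    def __init__(self, query_dim, context_dim, inner_dim):
        super().__init__()
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)

# Paraphrase of the SpatialSelfAttention projections:
# 1x1 convolutions, which default to bias=True in PyTorch.
class SpatialSelfAttentionProjections(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.q = nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1, padding=0)
        self.k = nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1, padding=0)
        self.v = nn.Conv2d(in_channels, in_channels, kernel_size=1, stride=1, padding=0)
```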
Any explanation will be greatly appreciated.