Why bias sharing?
parap1uie-s commented
Hi, lxtGH.
Thanks for the implementation; however, there are a few details that confuse me.
As defined in these lines:
```python
X_h2h = F.conv2d(X_h, self.weights[0:end_h_y, 0:end_h_x, :, :], self.bias[0:end_h_y], 1,
                 self.padding, self.dilation, self.groups)
X_l2l = F.conv2d(X_l, self.weights[end_h_y:, end_h_x:, :, :], self.bias[end_h_y:], 1,
                 self.padding, self.dilation, self.groups)
X_h2l = F.conv2d(X_h2l, self.weights[end_h_y:, 0:end_h_x, :, :], self.bias[end_h_y:], 1,
                 self.padding, self.dilation, self.groups)
X_l2h = F.conv2d(X_l, self.weights[0:end_h_y, end_h_x:, :, :], self.bias[0:end_h_y], 1,
                 self.padding, self.dilation, self.groups)
```
why do the calculations of `X_h2h` and `X_l2h` share the same convolution bias (`self.bias[0:end_h_y]`)? The same applies to `X_l2l` and `X_h2l`, which share `self.bias[end_h_y:]`.
I didn't find any details about bias sharing in the paper. Is this sharing reasonable?
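To make my confusion concrete: since the two paths that share a bias slice are summed into a single output (e.g. the high-frequency output is `X_h2h + X_l2h`), the shared bias appears to be added twice. Here is a minimal sketch of that effect. It is not the repo's code; the channel split, the upsampling step (done before the conv here just so the two outputs are addable), and all tensor names are made up for illustration:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical sizes: 64 in/out channels split 48/16 into
# high/low-frequency groups, 3x3 kernels, padding 1.
end_h_y, end_h_x = 48, 48
weights = torch.randn(64, 64, 3, 3)
bias = torch.randn(64)

X_h = torch.randn(1, end_h_x, 32, 32)          # high-frequency input
X_l = torch.randn(1, 64 - end_h_x, 16, 16)     # low-frequency input at half resolution
X_l_up = F.interpolate(X_l, scale_factor=2.0)  # upsampled here only to make shapes match

X_h2h = F.conv2d(X_h, weights[0:end_h_y, 0:end_h_x], bias[0:end_h_y], 1, 1)
X_l2h = F.conv2d(X_l_up, weights[0:end_h_y, end_h_x:], bias[0:end_h_y], 1, 1)

# Both paths carry bias[0:end_h_y], so summing them counts it twice:
out_shared = X_h2h + X_l2h

# Passing the bias on only one of the two paths counts it once:
X_l2h_nobias = F.conv2d(X_l_up, weights[0:end_h_y, end_h_x:], None, 1, 1)
out_once = X_h2h + X_l2h_nobias

diff = out_shared - out_once
print(torch.allclose(diff, bias[0:end_h_y].view(1, -1, 1, 1).expand_as(diff)))  # True
```

If the double-counting is intentional, I suppose the network can simply learn a halved bias, but I wanted to confirm whether this is a deliberate design choice or an oversight.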
Thanks in advance.