VITA-Group/SLaK

Code error

guozhiyao opened this issue · 1 comment

Hi, there is a feature size mismatch here when I use the normal Conv in get_conv2d.

out = self.LoRA1(inputs) + self.LoRA2(inputs)

You should change it to:

self.LoRA1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=(kernel_size, small_kernel),
                     stride=stride, padding=(padding, small_kernel // 2), dilation=1, groups=groups, bn=bn)
self.LoRA2 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=(small_kernel, kernel_size),
                     stride=stride, padding=(small_kernel // 2, padding), dilation=1, groups=groups, bn=bn)
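For reference, here is a minimal shape check with plain nn.Conv2d (the channel and spatial sizes below are made up for illustration, not taken from the repo) showing why the transposed padding above keeps the two branch outputs the same size so they can be summed:

import torch
import torch.nn as nn

# Hypothetical values for illustration only.
in_channels, out_channels, groups = 8, 8, 8
kernel_size, small_kernel, stride = 51, 5, 1
padding = kernel_size // 2

lora1 = nn.Conv2d(in_channels, out_channels, kernel_size=(kernel_size, small_kernel),
                  stride=stride, padding=(padding, small_kernel // 2), groups=groups, bias=False)
lora2 = nn.Conv2d(in_channels, out_channels, kernel_size=(small_kernel, kernel_size),
                  stride=stride, padding=(small_kernel // 2, padding), groups=groups, bias=False)

x = torch.randn(1, in_channels, 56, 56)
out = lora1(x) + lora2(x)   # both branches output (1, 8, 56, 56), so the sum works
print(out.shape)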

Does the sparse Conv calculate the padding automatically?

Besides, merge_kernel reports an error when I set Decom=True. Could you fix it?

Hi,

To use the normal Conv of PyTorch, you can change our sparse Conv to:

nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups, bias=bias)
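If it helps, a rough sketch of such a dense fallback (the parameter list here simply mirrors the arguments shown above and may not match the repo's get_conv2d exactly) could look like:

import torch.nn as nn

def get_conv2d_dense(in_channels, out_channels, kernel_size, stride, padding,
                     dilation, groups, bias):
    # Dense fallback: return a standard PyTorch Conv2d instead of the sparse Conv.
    return nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size,
                     stride=stride, padding=padding, dilation=dilation,
                     groups=groups, bias=bias)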

Regarding the padding, our sparse Conv uses zero padding by default to perform convolutions.
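For comparison, PyTorch's dense Conv2d also zero-pads by default:

conv = nn.Conv2d(8, 8, kernel_size=5, padding=2)   # padding_mode='zeros' is the nn.Conv2d default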

We do not merge kernels at inference; you can set small_kernel_merged=False, which is our default setting, to fix it.