DingXiaoH/RepLKNet-pytorch

depthwise_conv2d_implicit_gemm slower than nn.Conv2d

wdmwhh opened this issue · 10 comments

๐Ÿ› Describe the bug
Calling depthwise_conv2d_implicit_gemm.DepthWiseConv2dImplicitGEMM on CUDA is an order of magnitude slower than calling torch.nn.Conv2d.

I installed it according to the README.

cc: @DingXiaoH
Versions
torch 1.8.2+cuda11.1
cuda-11.1.1 + cudnn-8.1.1
both A100 and V100

Here is my speed test, dwblocks_speed.py (attached as a screenshot).

Tested on python 3.7.11 + torch 1.8.2 + cuda-11.1.1 + cudnn-8.1.1 + V100.

Roughly 15x slower: DepthWiseConv2dImplicitGEMM takes about 0.0194 s per call, while nn.Conv2d takes about 0.00125 s.

Hi, I checked the code and found no torch.cuda.synchronize() call, so the time recorded may not be the actual running time on the GPU. I would suggest following the speed test script of Swin (https://github.com/microsoft/Swin-Transformer/blob/main/main.py#L287).
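For reference, a minimal sketch of such a synchronized timing loop (the channel count, kernel size, input resolution, and batch size below are assumptions chosen for illustration; the DepthWiseConv2dImplicitGEMM constructor call follows the README usage):

```python
import time

import torch
import torch.nn as nn
from depthwise_conv2d_implicit_gemm import DepthWiseConv2dImplicitGEMM

@torch.no_grad()
def bench(module, x, warmup=10, iters=100):
    # Warm-up so one-time kernel selection does not pollute the measurement
    for _ in range(warmup):
        module(x)
    torch.cuda.synchronize()      # wait for all queued kernels before starting the clock
    start = time.time()
    for _ in range(iters):
        module(x)
    torch.cuda.synchronize()      # wait again before stopping the clock
    return (time.time() - start) / iters

channels, kernel_size, batch = 384, 31, 1   # illustrative shapes, not the ones from dwblocks_speed.py
x = torch.randn(batch, channels, 56, 56, device='cuda')

conv_ref = nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2,
                     groups=channels, bias=False).cuda()
conv_gemm = DepthWiseConv2dImplicitGEMM(channels, kernel_size, bias=False).cuda()

print('nn.Conv2d                  :', bench(conv_ref, x), 's/iter')
print('DepthWiseConv2dImplicitGEMM:', bench(conv_gemm, x), 's/iter')
```

Without the two synchronize() calls, time.time() mostly measures how long it takes to enqueue the asynchronous CUDA kernels, not how long they actually run.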

The test code is a small reproduction of the phenomenon (depthwise_conv2d_implicit_gemm being slower), which I first observed while training a large model.

Adding torch.cuda.synchronize() before each call to time.time() gives timings rather close to those of the original code.

This implementation is not suited for small batch sizes. In this case the batch size is 1, so the CUTLASS implementation is slower than PyTorch. You can try MegEngine instead.
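To see the batch-size effect directly, a rough sketch like the following sweeps the batch size and reports throughput for both layers (shapes and batch sizes are illustrative assumptions; absolute numbers will vary by GPU and data type):

```python
import time

import torch
import torch.nn as nn
from depthwise_conv2d_implicit_gemm import DepthWiseConv2dImplicitGEMM

channels, kernel_size = 384, 31               # illustrative RepLKNet-style shapes
conv_ref = nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2,
                     groups=channels, bias=False).cuda()
conv_gemm = DepthWiseConv2dImplicitGEMM(channels, kernel_size, bias=False).cuda()

@torch.no_grad()
def images_per_second(module, batch, iters=50, warmup=10):
    x = torch.randn(batch, channels, 56, 56, device='cuda')
    for _ in range(warmup):
        module(x)
    torch.cuda.synchronize()                  # synchronized timing, as discussed above
    start = time.time()
    for _ in range(iters):
        module(x)
    torch.cuda.synchronize()
    return batch * iters / (time.time() - start)

for batch in (1, 8, 32, 64):
    print(f'batch {batch:3d} | '
          f'nn.Conv2d: {images_per_second(conv_ref, batch):9.1f} img/s | '
          f'implicit GEMM: {images_per_second(conv_gemm, batch):9.1f} img/s')
```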

Thanks for your reply. It helped me a lot.

I have met the same problem.

I trained an ATSS detector with RepLKNet-31B and batch_size 1 (2080 Ti GPU, 11 GB memory..., and 'use_checkpoint' seems to be incompatible with DDP):

  • when using torch.nn.Conv2d(), the training time is about 1.00 s per iteration.
  • when using DepthWiseConv2dImplicitGEMM, the training time is about 4.87 s per iteration.

Hi, I encountered the same problem.
When using nn.Conv2d, the running time of the model is just ~0.5 s,
while with DepthWiseConv2dImplicitGEMM the time is ~6 s.
The batch size is set to 1 owing to memory limits (RTX 3060, a single GPU, 12 GB).

Thank you for sharing the results. As explained by @xiaocenxiaocen, our implementation is designed to pursue high throughput: the larger the batch size, the higher the throughput.