DingXiaoH/RepLKNet-pytorch

Error when using the provided PyTorch implementation of DWConv

jihaonew opened this issue · 1 comment

I am using the provided DWConv implementation but get the following error.
Traceback:

    self._scaler.scale(loss).backward(create_graph=create_graph)
  File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/autograd/__init__.py", line 175, in backward
    allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
  File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/autograd/function.py", line 253, in apply
    return user_fn(self, *args)
  File "/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.5/lib/python3.7/site-packages/torch/cuda/amp/autocast_mode.py", line 135, in decorate_bwd
    return bwd(*args, **kwargs)
  File "/mnt/cache/liujihao/.local/lib/python3.7/site-packages/depthwise_conv2d_implicit_gemm-0.0.0-py3.7-linux-x86_64.egg/depthwise_conv2d_implicit_gemm.py", line 25
, in backward
    dx = _extension.backward_data_fp32(grad, w)
RuntimeError: input must be contiguous

Any intuition on how to solve this problem?
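
The likely cause: the CUDA kernel behind backward_data_fp32 walks the gradient buffer with raw pointer arithmetic, so it only accepts contiguous tensors. If an upstream op such as a permute, transpose, or slice produced the incoming gradient as a strided view, it reaches backward non-contiguous and the check fails. A small plain-PyTorch snippet (arbitrary shapes, for illustration only) shows the distinction:

    import torch

    x = torch.randn(8, 64, 32, 32)
    g = x.permute(0, 2, 3, 1)   # a strided view: same storage, different strides
    print(g.is_contiguous())    # False -- a kernel that needs contiguous memory rejects this
    g = g.contiguous()          # materializes a contiguous copy
    print(g.is_contiguous())    # True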

Solved! Just add "grad = grad.contiguous()" in the two backward functions.
I have created a pull request to the MegEngine repo.
You can simply add the two lines of code following this commit:
MegEngine/cutlass@5b19383
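
For reference, a minimal sketch of what the patched FP32 backward could look like. Only backward_data_fp32 is taken from the traceback; the class name, forward_fp32, backward_filter_fp32, and the autocast decorators are assumptions and may differ from the installed package:

    import torch
    from torch.autograd import Function
    from torch.cuda.amp import custom_bwd, custom_fwd

    class _DepthWiseConv2dImplicitGEMMFP32(Function):
        # _extension stands for the compiled CUDA module that
        # depthwise_conv2d_implicit_gemm.py loads; the exact import is omitted here.

        @staticmethod
        @custom_fwd(cast_inputs=torch.float32)
        def forward(ctx, x, w):
            ctx.save_for_backward(x, w)
            return _extension.forward_fp32(x, w)

        @staticmethod
        @custom_bwd
        def backward(ctx, grad):
            x, w = ctx.saved_tensors
            # The fix: the CUDA kernels read the gradient through raw pointers
            # and require contiguous memory, so force a contiguous copy before
            # handing it over. contiguous() is a no-op if the tensor is
            # already contiguous.
            grad = grad.contiguous()
            dx = _extension.backward_data_fp32(grad, w)
            dw = _extension.backward_filter_fp32(grad, x, w)
            return dx, dw

The FP16 class gets the same one-line change in its backward, which is why the fix touches two functions.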