Unofficial PyTorch reimplementation of the paper Involution: Inverting the Inherence of Convolution for Visual Recognition by Duo Li, Jie Hu, Changhu Wang et al., published at CVPR 2021.
This repository includes a pure PyTorch implementation of a 2D and 3D involution.
Please note that the official implementation provides a more memory-efficient CuPy implementation of the 2D involution. Additionally, shikishima-TasakiLab provides a fast and memory-efficient CUDA implementation of the 2D involution.
The 2D and 3D involution can be easily installed by using pip:

```shell
pip install git+https://github.com/ChristophReich1996/Involution
```
Additional examples, such as strided involutions or transposed-convolution-like involutions, can be found in the `example.py` file; a brief strided sketch also follows the basic usage example below.
The 2D involution can be used as an `nn.Module` as follows:

```python
import torch
from involution import Involution2d

involution = Involution2d(in_channels=32, out_channels=64)
output = involution(torch.rand(1, 32, 128, 128))
```
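As a minimal sketch of the strided case (using only the `stride` argument from the parameter table below; the printed shape assumes the default 7×7 kernel with padding of 3), a 2D involution that halves the spatial resolution could look like this:

```python
import torch
from involution import Involution2d

# Strided 2D involution: with the default 7x7 kernel and padding of (3, 3),
# a stride of (2, 2) halves the spatial resolution (128x128 -> 64x64).
strided_involution = Involution2d(in_channels=32, out_channels=64, stride=(2, 2))
output = strided_involution(torch.rand(1, 32, 128, 128))
print(output.shape)  # torch.Size([1, 64, 64, 64])
```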
The 2D involution takes the following parameters.
Parameter | Description | Type |
---|---|---|
in_channels | Number of input channels | int |
out_channels | Number of output channels | int |
sigma_mapping | Non-linear mapping as introduced in the paper. If `None`, BN + ReLU is utilized (default=None) | Optional[nn.Module] |
kernel_size | Kernel size to be used (default=(7, 7)) | Union[int, Tuple[int, int]] |
stride | Stride factor to be utilized (default=(1, 1)) | Union[int, Tuple[int, int]] |
groups | Number of groups to be employed (default=1) | int |
reduce_ratio | Reduction ratio of involution channels (default=1) | int |
dilation | Dilation in unfold to be employed (default=(1, 1)) | Union[int, Tuple[int, int]] |
padding | Padding to be used in unfold operation (default=(3, 3)) | Union[int, Tuple[int, int]] |
bias | If true, a bias is utilized in each convolution layer (default=False) | bool |
force_shape_match | If true, potential shape mismatches are resolved by average pooling (default=False) | bool |
**kwargs | Unused additional keyword arguments | Any |
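The parameters above can be combined freely. As a sketch (the kernel size/padding pairing and the group/reduction settings here are my own choices, picked so the spatial resolution is preserved), a grouped 3×3 involution with a reduction ratio of 2 could look like this:

```python
import torch
from involution import Involution2d

# 3x3 involution with padding (1, 1) so the spatial resolution is preserved,
# using two groups and a reduction ratio of 2 in the kernel-generating branch.
involution = Involution2d(
    in_channels=32,
    out_channels=64,
    kernel_size=(3, 3),
    padding=(1, 1),
    groups=2,
    reduce_ratio=2,
)
output = involution(torch.rand(1, 32, 64, 64))
print(output.shape)  # torch.Size([1, 64, 64, 64])
```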
The 3D involution can be used as an `nn.Module` as follows:

```python
import torch
from involution import Involution3d

involution = Involution3d(in_channels=8, out_channels=16)
output = involution(torch.rand(1, 8, 32, 32, 32))
```
The 3D involution takes the following parameters.
Parameter | Description | Type |
---|---|---|
in_channels | Number of input channels | int |
out_channels | Number of output channels | int |
sigma_mapping | Non-linear mapping as introduced in the paper. If `None`, BN + ReLU is utilized (default=None) | Optional[nn.Module] |
kernel_size | Kernel size to be used (default=(7, 7, 7)) | Union[int, Tuple[int, int, int]] |
stride | Stride factor to be utilized (default=(1, 1, 1)) | Union[int, Tuple[int, int, int]] |
groups | Number of groups to be employed (default=1) | int |
reduce_ratio | Reduction ratio of involution channels (default=1) | int |
dilation | Dilation in unfold to be employed (default=(1, 1, 1)) | Union[int, Tuple[int, int, int]] |
padding | Padding to be used in unfold operation (default=(3, 3, 3)) | Union[int, Tuple[int, int, int]] |
bias | If true, a bias is utilized in each convolution layer (default=False) | bool |
force_shape_match | If true, potential shape mismatches are resolved by average pooling (default=False) | bool |
**kwargs | Unused additional keyword arguments | Any |
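Analogously to the 2D case, the 3D parameters can be adjusted. A minimal sketch (again with a kernel size/padding pairing chosen by me so the volume resolution is preserved):

```python
import torch
from involution import Involution3d

# 3x3x3 involution with padding (1, 1, 1) so depth, height, and width are preserved.
involution = Involution3d(
    in_channels=8,
    out_channels=16,
    kernel_size=(3, 3, 3),
    padding=(1, 1, 1),
    reduce_ratio=2,
)
output = involution(torch.rand(1, 8, 16, 16, 16))
print(output.shape)  # torch.Size([1, 16, 16, 16, 16])
```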
The original paper can be cited with the following BibTeX entry:

```bibtex
@inproceedings{Li2021,
    author    = {Li, Duo and Hu, Jie and Wang, Changhu and Li, Xiangtai and She, Qi and Zhu, Lei and Zhang, Tong and Chen, Qifeng},
    title     = {Involution: Inverting the Inherence of Convolution for Visual Recognition},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021}
}
```