TypeError: unsupported operand type(s) for //: 'NoneType' and 'int' for transunet_2d and SWIN Unet
sm-potter opened this issue · 0 comments
sm-potter commented
The full traceback is:
File ~/.conda/envs/deeplearning3/envs/deeplearning_trans/lib/python3.9/site-packages/keras_unet_collection/_model_transunet_2d.py:339, in transunet_2d(input_size, filter_num, n_labels, stack_num_down, stack_num_up, embed_dim, num_mlp, num_heads, num_transformer, activation, mlp_activation, output_activation, batch_norm, pool, unpool, backbone, weights, freeze_backbone, freeze_batch_norm, name)
336 IN = Input(input_size)
338 # base
--> 339 X = transunet_2d_base(IN, filter_num, stack_num_down=stack_num_down, stack_num_up=stack_num_up,
340 embed_dim=embed_dim, num_mlp=num_mlp, num_heads=num_heads, num_transformer=num_transformer,
341 activation=activation, mlp_activation=mlp_activation, batch_norm=batch_norm, pool=pool, unpool=unpool,
342 backbone=backbone, weights=weights, freeze_backbone=freeze_backbone, freeze_batch_norm=freeze_batch_norm, name=name)
344 # output layer
345 OUT = CONV_output(X, n_labels, kernel_size=1, activation=output_activation, name='{}_output'.format(name))
File ~/.conda/envs/deeplearning3/envs/deeplearning_trans/lib/python3.9/site-packages/keras_unet_collection/_model_transunet_2d.py:161, in transunet_2d_base(input_tensor, filter_num, stack_num_down, stack_num_up, embed_dim, num_mlp, num_heads, num_transformer, activation, mlp_activation, batch_norm, pool, unpool, backbone, weights, freeze_backbone, freeze_batch_norm, name)
158 input_size = input_tensor.shape[1]
160 # encoded feature map size
--> 161 encode_size = input_size // 2**(depth_-1)
163 # number of size-1 patches
164 num_patches = encode_size ** 2
TypeError: unsupported operand type(s) for //: 'NoneType' and 'int'
I am using numpy 1.19.5.
I do not encounter this error with either unet_3plus_2d or unet_plus_2d.
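For context, the failing line computes `encode_size = input_size // 2**(depth_-1)`, where `input_size` is read from `input_tensor.shape[1]`. That value is `None` whenever the model is built with an unspecified spatial size, e.g. `input_size=(None, None, 3)`. This is a guess, since the model-building call isn't shown, but it would explain why unet_plus_2d and unet_3plus_2d work (they never divide by the spatial size) while transunet_2d and the Swin variant fail (they need a concrete size to compute the number of patches). A minimal sketch of the arithmetic, with no Keras dependency:

```python
# Minimal reproduction of the failing computation in
# _model_transunet_2d.py line 161: encode_size = input_size // 2**(depth_-1).
# The values 512 and depth=5 below are hypothetical examples, not taken
# from the original report.

def encode_size(input_size, depth):
    """Mimic the encoded-feature-map size computation from transunet_2d_base."""
    return input_size // 2 ** (depth - 1)

# With a concrete spatial size the division succeeds:
print(encode_size(512, 5))  # 512 // 16 = 32

# With an undefined spatial dimension, Keras reports shape[1] as None,
# and the same line raises the reported error:
try:
    encode_size(None, 5)
except TypeError as e:
    print(e)  # unsupported operand type(s) for //: 'NoneType' and 'int'
```

If this is indeed the cause, passing a fully specified `input_size` such as `(512, 512, 3)` to `transunet_2d(...)` (the first positional argument in the signature shown in the traceback) should avoid the error.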