Proper out_shape for NumPy resize?
Daiver opened this issue · 3 comments
Hi! First of all - thank you for this awesome library.
I succeeded in resizing PyTorch tensors using the channels-first representation, but I cannot figure out which out_shape I should use for NumPy ndarrays. I tried to use a channels-last image directly with the following code:
import resize_right
import cv2
img = cv2.imread("path_to_my_image")
target_height_width = (img.shape[1] // 8, img.shape[0] // 8)
print(img.shape)
res = resize_right.resize(img, out_shape=target_height_width)
print(res.shape)
The input image has shape (2048, 2048, 3), but the output is (256, 256, 2048), which is quite strange.
If I convert my image to channels-first via np.rollaxis(img, 2, 0), it returns shape (256, 256, 3), which is correct but not consistent with the input. However, the result looks broken (basically vertical color lines).
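For reference, the channels-first conversion mentioned above would look like this (a sketch assuming the intent of the np.rollaxis call was to move the channel axis to the front):

```python
import numpy as np

img = np.zeros((2048, 2048, 3), dtype=np.uint8)  # stand-in for the cv2 image
chw = np.rollaxis(img, 2, 0)  # move axis 2 (channels) to position 0
print(chw.shape)  # (3, 2048, 2048)
```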
Can you help me figure out how to use resize_right properly?
Just found that if I specify the full out_shape for NumPy, it works. It still looks like a problem to me, because the behavior is not consistent with Torch.
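Until that is fixed, the workaround amounts to passing all dimensions explicitly, channels included, so the library has nothing to guess (a sketch; the resize call itself is shown commented out since it assumes resize_right is installed):

```python
import numpy as np

img = np.zeros((2048, 2048, 3), dtype=np.uint8)  # stand-in for the cv2 image
# include the channel dim explicitly so resize leaves it untouched
full_out_shape = (img.shape[0] // 8, img.shape[1] // 8, img.shape[2])
print(full_out_shape)  # (256, 256, 3)
# res = resize_right.resize(img, out_shape=full_out_shape)
```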
This problem seems to come from this. The correct operation should be:
out_shape = (list(out_shape) + list(in_shape[len(out_shape):])
if fw is numpy
else list(in_shape[:-len(out_shape)]) + list(out_shape))
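The expression above can be checked on its own: it pads a partial out_shape with the untouched dims of in_shape, keeping the trailing dims for NumPy (channels-last, spatial dims first) and the leading dims for Torch (channels-first, spatial dims last). A minimal sketch, with a hypothetical helper standing in for the library's internal logic and its fw framework flag:

```python
import numpy as np

def complete_out_shape(in_shape, out_shape, fw=np):
    """Hypothetical helper mirroring the proposed fix: fill in the dims
    that out_shape leaves unspecified. fw mimics the library's framework
    flag (np for NumPy, anything else for the Torch convention)."""
    return (list(out_shape) + list(in_shape[len(out_shape):])
            if fw is np
            else list(in_shape[:-len(out_shape)]) + list(out_shape))

# channels-last NumPy image (H, W, C): keep trailing channel dim
print(complete_out_shape((2048, 2048, 3), (256, 256)))  # [256, 256, 3]
# channels-first Torch tensor (N, C, H, W): keep leading batch/channel dims
print(complete_out_shape((1, 3, 2048, 2048), (256, 256), fw=None))  # [1, 3, 256, 256]
```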
Thanks, the new PR by @LuoXin-s resolves this issue.