Dimension mismatch after tensorset
srijiths opened this issue · 3 comments
srijiths commented
Hi,
I see a dimension mismatch when retrieving the image back using tensorget immediately after tensorset.
Steps to reproduce:
import numpy as np
from PIL import Image
import redisai as rai

con = rai.Client()  # assumes a RedisAI server on localhost:6379

new_shape = 416
image_path = './demo_data/dog.jpg'
pil_image = Image.open(image_path)
numpy_img = np.array(pil_image, dtype='float32')
print('raw image shape and dtype before pre-processing:', numpy_img.shape, numpy_img.dtype)

# letter_box is a user-defined resize that also expands the dimension on axis 0
image = letter_box(numpy_img, new_shape)
print('shape and dtype before creating BlobTensor:', image.shape, image.dtype)

image = rai.BlobTensor.from_numpy(image)
print('shape after creating BlobTensor:', image.shape)

con.tensorset('image', image)
img = con.tensorget('image', as_type=rai.BlobTensor).to_numpy()
print('shape and dtype when retrieving using tensorget:', img.shape, img.dtype)
Output
raw image shape and dtype before pre-processing: (576, 768, 3) float32
shape and dtype before creating BlobTensor: (1, 416, 416, 3) float32
shape after creating BlobTensor: (1, 416, 416, 3)
shape and dtype when retrieving using tensorget: (1, 1, 416, 416, 3) float32
The tensor retrieved with tensorget has an extra leading dimension: (1, 1, 416, 416, 3) instead of the (1, 416, 416, 3) that was set.
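As a temporary workaround I am squeezing the extra leading axis after tensorget (assuming the spurious dimension is always prepended as axis 0):

img = con.tensorget('image', as_type=rai.BlobTensor).to_numpy()
img = np.squeeze(img, axis=0)  # drop the extra leading axis: (1, 1, 416, 416, 3) -> (1, 416, 416, 3)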
Thanks,
hhsecond commented
Hi @srijiths, thanks a lot for reporting this. The issue is fixed in master; we are pushing the new version to PyPI soon.
hhsecond commented
The new version is out on PyPI.
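You can pull the fixed release with a regular pip upgrade:

pip install --upgrade redisai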