floodsung/LearningToCompare_FSL

output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28]

xun6000 opened this issue · 13 comments

Changing the image into RGB doesn't solve this problem.

I tried to merge task_generator.py into the omniglot test-one-shot script and now I am facing the same issue.

Now I am also facing the same issue.

@xun6000
It can be fixed by removing 'normalize' in transforms.Compose() like this:
dataset = Omniglot(task,split=split,transform=transforms.Compose([Rotate(rotation),transforms.ToTensor()]))
It does not affect the performance (accuracy).
I guess the reason is that Omniglot is black (characters) on white (background), so normalization is not critical to the performance.

Not sure if anyone else has faced this.
I removed the 'normalize' in transforms.Compose() and got this:

Traceback (most recent call last):
File "omniglot_train_one_shot.py", line 260, in
main()
File "omniglot_train_one_shot.py", line 188, in main
one_hot_labels = Variable(torch.zeros(BATCH_NUM_PER_CLASS*CLASS_NUM, CLASS_NUM).scatter_(1, batch_labels.view(-1,1), 1)).cuda(GPU)
RuntimeError: Expected object of scalar type Long but got scalar type Int for argument #3 'index'
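
(For what it's worth: scatter_ expects a LongTensor as its index argument, so one likely fix, assuming batch_labels is the IntTensor returned by the task generator's data loader, is to cast it with .long(), turning the line from the traceback into:)

    # Cast the labels to int64 so scatter_ accepts them as an index tensor.
    one_hot_labels = Variable(torch.zeros(BATCH_NUM_PER_CLASS*CLASS_NUM, CLASS_NUM).scatter_(1, batch_labels.long().view(-1,1), 1)).cuda(GPU)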

@fnever520 Were you able to resolve this issue?

@kamalaVJ Unfortunately no, I am still not able to resolve it. Could you shed some light?

I was able to resolve this by removing 'normalize' in the task generator, or you can try running it on PyTorch 0.3.

You are right!

I also met the same issue. Could you please help? Thanks.

Hello,

I think I know how to solve it.

0. My Environment

OS: Ubuntu 16.04
Python: Python 3.7.3
PyTorch: 1.2.0

1. Why This Occurs

I think this is a problem with the Omniglot dataset itself.
If you open one of the images with PIL.Image, you can see that the loaded image has only 1 channel, unlike miniImageNet's 3-channel RGB images.

However, in the get_data_loader method of ./omniglot/task_generator.py, the normalize transform's mean and std each have 3 values, which assumes the input tensor has 3 channels. That is why the error occurs.
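
A minimal sketch that reproduces the mismatch (the file path is illustrative; the 3-value mean/std mirror the ones in task_generator.py):

    from PIL import Image
    import torchvision.transforms as transforms

    img = Image.open("some_omniglot_character.png")  # hypothetical path to an Omniglot image
    print(img.mode)  # 'L' -> a single-channel (grayscale) image

    # A 3-value mean/std expects a [3, H, W] tensor, but ToTensor() yields [1, 28, 28] here,
    # so Normalize fails with:
    # "output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28]"
    normalize = transforms.Normalize(mean=[0.92206, 0.92206, 0.92206],
                                     std=[0.08426, 0.08426, 0.08426])
    tensor = transforms.ToTensor()(img.resize((28, 28)))
    normalize(tensor)  # RuntimeError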

2. How to Fix It

Just delete the last 2 numbers of mean and std, as in the code below:

    normalize = transforms.Normalize(mean=[0.92206,], std=[0.08426,])  # FIX: use 1 channel, instead of 3 channels. 

And then it is okay.
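
In context, the transform in get_data_loader would then look roughly like this (a sketch; Omniglot and Rotate are the classes already defined in task_generator.py):

    normalize = transforms.Normalize(mean=[0.92206,], std=[0.08426,])  # 1-channel statistics
    dataset = Omniglot(task, split=split,
                       transform=transforms.Compose([Rotate(rotation),
                                                     transforms.ToTensor(),
                                                     normalize]))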

yours sincerely,
@WMF1997

Changing the image into RGB doesn't solve this problem.

@xun6000 Indeed, simply converting the Omniglot images to RGB (1 channel -> 3 channels) does not solve it on its own.
However, if you convert the images to RGB and also change the input channel count of CNNEncoder's first layer to 3, then it does fix the error, though it may affect the result.
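
A rough sketch of that alternative (assuming CNNEncoder's first block is a standard Conv2d/BatchNorm/ReLU/MaxPool sequence with 1 input channel, as in the original Omniglot model):

    from PIL import Image
    import torch.nn as nn

    # 1) When loading an Omniglot image (e.g. in Omniglot.__getitem__), convert it to 3 channels:
    image_path = "some_omniglot_character.png"   # hypothetical path
    image = Image.open(image_path).convert('RGB')

    # 2) CNNEncoder's first block must then take 3 input channels instead of 1:
    layer1 = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=0),  # was nn.Conv2d(1, 64, ...)
        nn.BatchNorm2d(64, momentum=1, affine=True),
        nn.ReLU(),
        nn.MaxPool2d(2))

In that case the original 3-value normalize can stay as it is, since the input tensors then really have 3 channels.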

yours sincerely,
@WMF1997

Now I am also facing the same issue

I faced this issue and used the "delete the last 2 numbers of mean and std" fix, but now I have another issue:
[error screenshot]
Is loss.data[0] null?
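
(A likely explanation, assuming the screenshot shows the usual 0-dim indexing error on newer PyTorch: loss.data[0] only works on PyTorch 0.3; from 0.4 onwards the loss is a 0-dim tensor, so loss.item() should be used instead. A minimal sketch:)

    import torch

    loss = torch.tensor(0.5)      # stand-in for the training loss (a 0-dim tensor)
    # print(loss.data[0])         # PyTorch 0.3 style; on 0.4+ this raises
    #                             # "invalid index of a 0-dim tensor"
    print("loss:", loss.item())   # PyTorch 0.4+ style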