Hi author, is there a problem with your code below? Regarding the dimension of labels: after labels = labels[:, 0, :], shouldn't labels have become two-dimensional?
Closed this issue · 3 comments
```python
for i in range(labels.shape[1]):
    assert outputs[i].shape == labels[:, i, :].shape
    # Mask out missing (NaN) labels for this target.
    non_nan = ~torch.isnan(labels[:, i, :])
    if non_nan.any():
        loss = self.criterion(outputs[i][non_nan], labels[:, i, :][non_nan])
        loss.backward(retain_graph=True)
        losses[i] += loss.detach()
```
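For context, here is a minimal self-contained sketch of the masked per-target loss loop above, with made-up shapes (2 targets, batch of 4, feature dimension 1) and a plain MSE criterion; none of these values come from the repository. Each output here has its own graph, so retain_graph is not needed in the sketch:

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()
# One prediction tensor per target; shapes are assumptions for illustration.
outputs = [torch.randn(4, 1, requires_grad=True) for _ in range(2)]
labels = torch.randn(4, 2, 1)
labels[0, 0, 0] = float("nan")  # simulate a missing label

losses = [0.0, 0.0]
for i in range(labels.shape[1]):
    # Boolean mask drops NaN positions before computing the loss.
    non_nan = ~torch.isnan(labels[:, i, :])
    if non_nan.any():
        loss = criterion(outputs[i][non_nan], labels[:, i, :][non_nan])
        loss.backward()
        losses[i] += loss.detach()
```

The boolean mask indexing (`outputs[i][non_nan]`) flattens the kept elements into a 1-D tensor, which is fine for an element-wise criterion like MSE.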
Hi, sorry for the late response.
It's because, before "labels = labels[:, 0, :]", the shape of labels is [b, p, 1, 1], where b and p denote the batch size and the number of patches. In PyTorch, omitted trailing dimensions are treated as implicit full slices, so "labels[:, 0, :]" is equivalent to "labels[:, 0, :, :]": the result has shape [b, 1, 1], which is still three-dimensional.
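This slicing behavior can be checked directly. A small sketch, using hypothetical values b=4 and p=3 (the names follow the explanation above, not the actual repository):

```python
import torch

b, p = 4, 3
labels = torch.randn(b, p, 1, 1)  # shape [b, p, 1, 1] as described above

# Omitted trailing dimensions are implicit full slices, so these two
# expressions select the same elements and produce the same shape.
a = labels[:, 0, :]     # shape [b, 1, 1], still 3-D
c = labels[:, 0, :, :]  # shape [b, 1, 1]

print(a.shape)  # torch.Size([4, 1, 1])
```

Only the explicitly indexed dimension (the patch axis) is removed; the two trailing size-1 dimensions survive, which is why labels does not become two-dimensional.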
Thank you so much for your prompt and helpful response! I appreciate your time and effort in explaining the issue to me.
😊