yassouali/CCT

Error on custom dataset

SahadevPoudel opened this issue · 2 comments

I am having trouble training on medical images. The dataset has RGB images and mask images and contains only one class. I just replaced the path and set num_classes = 2 in voc.py. However, I am getting this error:

RuntimeError: 1only batches of spatial targets supported (3D tensors) but got targets of size: : [2, 320, 320, 3].

Can you help me? Also, what should I change in pallete.py?

I think this refers to your target (= ground truth) images. They should have only one channel, with value 0 for background and 1 for your class.
The size [2, 320, 320, 3] is [batch size, height, width, number of channels]. The number of channels should be 1, not 3, so your ground truth images cannot be passed in as RGB images; they need to be converted to single-channel label maps (a sketch is shown below).
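If it helps, here is a minimal sketch (not the repo's code) of converting an RGB mask file into the single-channel label map the loss expects; `rgb_mask_to_label` and the "any non-black pixel is foreground" rule are just assumptions you would adapt to your own annotations:

```python
import numpy as np
from PIL import Image

# Sketch: convert an RGB ground-truth mask into a single-channel label map
# with 0 = background and 1 = foreground.
# Assumption: foreground pixels are any non-black pixels; adjust this rule
# (or map specific colors to class indices) to match your annotations.
def rgb_mask_to_label(mask_path):
    mask = np.array(Image.open(mask_path).convert("RGB"))  # shape (H, W, 3)
    label = (mask.sum(axis=-1) > 0).astype(np.uint8)       # shape (H, W), values {0, 1}
    return label
```

With that, a target batch becomes [2, 320, 320] instead of [2, 320, 320, 3], which is the 3D shape the loss asks for.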
Personally, I haven't changed anything in pallete.py for my application, but I did set a custom color palette because I wanted specific colors for my classes (see the sketch below for what a two-class palette could look like).
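In case it is useful, a two-class palette in the flat VOC style (one R, G, B triple per class index) might look like the sketch below; the red for class 1 and the `colorize` helper are arbitrary choices, and you would adapt this to however pallete.py builds its palette in your setup:

```python
import numpy as np
from PIL import Image

# Sketch of a two-class palette: class 0 = background (black),
# class 1 = your class (red, chosen arbitrarily).
two_class_palette = [0, 0, 0,    # class 0: background
                     255, 0, 0]  # class 1: foreground
two_class_palette += [0] * (768 - len(two_class_palette))  # pad to 256 * 3 entries

def colorize(label):
    """Turn an (H, W) uint8 label map into a paletted image for visualization."""
    img = Image.fromarray(label.astype(np.uint8), mode="P")
    img.putpalette(two_class_palette)
    return img
```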
Hope this helps :)

Hey! I'm having the same problem training on medical images. I only set num_classes = 2 in my own data loader, but training always fails. However, when I change it back to 21, it trains successfully. Do I need to modify anything else in this case? Maybe I need to write another pallete.py?