CT image segmentation is not good
jianjun0407 opened this issue · 3 comments
Thank you for sharing. I tried to perform lung parenchyma segmentation directly on CT images, but the results were not good. Below is my result map. What could the possible reasons be?
Normalization was done as described in the paper: the intensities are clipped first and then normalized to [0, 1].
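Concretely, the preprocessing I mean is something like this (a minimal sketch; the [-1000, 400] HU window is just an assumed lung window, not necessarily the paper's exact values):

import numpy as np

def normalize_ct(volume, hu_min=-1000.0, hu_max=400.0):
    # Clip to an intensity window, then rescale to [0, 1].
    # The [-1000, 400] HU window is an assumed lung window; adjust
    # it to whatever range the paper / the task calls for.
    volume = np.clip(volume, hu_min, hu_max)
    return (volume - hu_min) / (hu_max - hu_min)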
Hi, I've been having the same issue. First of all, congratulations on the work, it's pretty amazing. I actually tried with different types of images and got very bad results, even with images of the type used in the paper, so I don't understand what the problem is. Here is an example with a cardiac MRI image.
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import numpy as np
import torch

# Load the query and support volumes (channel 0: image, channel 1: segmentation),
# cropping 2 voxels from each in-plane border
im = np.load(r'C:\Users\Manuel\Desktop\mine\Few shot test\MMs_micai\OpenDataset\Training\Labeled\A0S9V9\diastole.npy')[:,2:-2,2:-2,:]
ref = np.load(r'C:\Users\Manuel\Desktop\mine\Few shot test\MMs_micai\OpenDataset\Training\Labeled\A1E9Q1\diastole.npy')[:,2:-2,2:-2,:]

# Support segmentation, reordered to (slices, H, W)
ref_seg = ref[1]
ref_seg = np.swapaxes(ref_seg, 0, 2)
ref_seg = np.swapaxes(ref_seg, 1, 2)

# Support image, reordered to (slices, H, W)
ref = ref[0]
ref = np.swapaxes(ref, 0, 2)
ref = np.swapaxes(ref, 1, 2)

# Query image, reordered to (slices, H, W)
im = im[0]
im = np.swapaxes(im, 0, 2)
im = np.swapaxes(im, 1, 2)

# Keep only label 3 and binarize it (the model expects {0, 1} support masks)
ref_seg[ref_seg < 3] = 0
ref_seg[ref_seg >= 3] = 1

im = torch.as_tensor(im).float()
ref = torch.as_tensor(ref).float()
ref_seg = torch.as_tensor(ref_seg).float()

# Resize in-plane to 128x128; the mask uses the default nearest mode so it stays binary
im = torch.nn.functional.interpolate(im[None, None], (12, 128, 128), mode='trilinear')
ref = torch.nn.functional.interpolate(ref[None, None], (13, 128, 128), mode='trilinear')
ref_seg = torch.nn.functional.interpolate(ref_seg[None, None], (13, 128, 128))

# Move the slice axis to the front: (slices, 1, 128, 128)
im = torch.swapaxes(im, 0, 2)[:, 0, :, :, :]
ref = torch.swapaxes(ref, 0, 2)[:, 0, :, :, :]
ref_seg = torch.swapaxes(ref_seg, 0, 2)[:, 0, :, :, :]

logits = model(
    im[5:6],             # (B, 1, H, W) -- query slice 5
    ref[None, 4:8],      # (B, S, 1, H, W) -- 4 support slices
    ref_seg[None, 4:8])  # (B, S, 1, H, W) -- their binary masks
prediction = torch.sigmoid(logits)

# Overlay the thresholded prediction on the slice that was actually segmented
# (slice 5, matching im[5:6] above)
plt.imshow(im[5, 0], cmap='gray')
plt.imshow(prediction[0, 0].detach().numpy() > 0.5, alpha=0.6)
plt.show()
The segmentation generated is like this:
while an example of the support is like this:
I clearly see a tendency, which I've also observed with other types of images, where the predicted mask tends to be shifted and located closer to the support mask rather than the real target. Does this mean that the support and target regions should sit at the same location within the image? So far I have only used different slices of the same volume as support; I don't know whether adding different volumes, in which the target sits at different locations, would solve this. If the model has this bias, it might be mitigated with more labeled support cases, but I haven't been able to try that yet. I'm posting this to see whether this supposition is correct.

Additionally, I'd like to know how many support images are needed before good segmentations start to appear. In the Colab example the results were pretty good with 10; in the code above I used 4, and in other attempts around 5-6. Should I increase the number, or should these counts be enough for a task like this?
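As a way to test the support-size question, a sweep like the following could work (a sketch reusing im, ref, ref_seg, and model from the code above; gt is a hypothetical binary ground-truth mask for the query slice):

import torch

def dice(pred, target, eps=1e-6):
    # Dice overlap between two binary masks
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# gt: hypothetical binary ground-truth mask for query slice 5, shape (128, 128)
for n_support in (2, 4, 6, 8, 10):
    logits = model(
        im[5:6],                    # query slice
        ref[None, :n_support],      # first n_support support slices
        ref_seg[None, :n_support])  # and their binary masks
    pred = (torch.sigmoid(logits)[0, 0] > 0.5).float()
    print(n_support, dice(pred, gt).item())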
Here is a different example, for several regions within a spine MRI:
The segmentations are bad, but the shift here is pretty clear (the support used was slightly to the right, just like the predicted segmentations).
I can provide the files I used, which already contain normalized values; the code above also takes care of the dimensionality rearrangements. The first loaded file is diastole0 and the second is diastole1, in case anyone wants to try to reproduce this.
Python version is 3.9.13.
Again, I'll keep trying to test some of the questions I'm raising here, but if anyone already knows the answers (I expect the authors to have more insight into these questions), please chime in.
Thanks in advance
@jianjun0407 @manuel-lincbiotech Could you please share the Colab file you used, so it can be run on other datasets you have handy?
Greetings! @jianjun0407 could you please provide the code you used to get this segmentation? I agree that it seems quite bad.
@manuel-lincbiotech thank you very much for the analysis and effort. I was wondering if you could visualize what your support sets looked like for these segmentations? It's true that when we built supports, they always came from different subjects, not from slices of the same subject, so this could potentially be a source of misalignment (for spine, we didn't actually need that large a support set).
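For reference, building the support from different subjects could look roughly like this (a sketch; load_subject is a hypothetical loader returning a preprocessed (image, mask) slice pair, each of shape (1, H, W), and query_slice is a preprocessed (1, H, W) query):

import torch

# Hypothetical subject IDs; each contributes one preprocessed (image, mask) slice pair
subject_ids = ['A1E9Q1', 'A2B3C4', 'A5D6E7', 'A8F9G0']
pairs = [load_subject(sid) for sid in subject_ids]

support_images = torch.stack([img for img, _ in pairs])[None]  # (1, S, 1, H, W)
support_labels = torch.stack([msk for _, msk in pairs])[None]  # (1, S, 1, H, W)

logits = model(query_slice[None], support_images, support_labels)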