normalization into [0, 1] using window width/level of 1500/-650
aivision2022 opened this issue · 3 comments
**Is your feature request related to a problem? Please describe.**
> To segment COVID-19 pneumonia lesions from your own images, make sure that the images have been cropped into the lung region, and the intensity has been normalized into [0, 1] using window width/level of 1500/-650
You mentioned the message above. I would like to test some DICOM or NIfTI files with your code. Could you let me know how to do the pre-processing described above on my own DICOM or NIfTI files?
It's possible to achieve the windowing with:

```python
monai.transforms.ScaleIntensityRange(
    a_min=-650 - (1500 / 2.0), a_max=-650 + (1500 / 2.0), b_min=0, b_max=1
)
```
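
For a whole file, a minimal sketch of how this could be wired up with MONAI's loading transforms (the file path is a placeholder, and `clip=True` is an addition so out-of-window values stay in [0, 1]):

```python
import monai.transforms as mt

# Sketch only: load a volume and normalize it into [0, 1] with a
# window width/level of 1500/-650, i.e. the HU range [-1400, 100].
preprocess = mt.Compose([
    mt.LoadImage(image_only=True),
    mt.ScaleIntensityRange(
        a_min=-650 - 1500 / 2.0,  # level - width/2 = -1400 HU
        a_max=-650 + 1500 / 2.0,  # level + width/2 = 100 HU
        b_min=0.0, b_max=1.0,
        clip=True,  # assumption: clip out-of-window values into [0, 1]
    ),
])

volume = preprocess("my_case.nii.gz")  # placeholder path
```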
Thank you for your reply. I will try your suggestion.

Basically, I want to go from DCM files to PNG images, and then from the PNG images to a NIfTI file. So the code contains the logic below; I am not sure whether it is correct.
```python
import numpy as np
import pydicom
import matplotlib.pyplot as plt

dcm = pydicom.dcmread(filepath)  # path to a single DICOM slice
print(dcm)

# Convert the stored pixel values to Hounsfield units before windowing.
pixel_array = dcm.pixel_array * dcm.RescaleSlope + dcm.RescaleIntercept
plt.imshow(pixel_array, cmap='gray')

# A window width/level of 1500/-650 is centered on the level, so it
# spans [-1400, 100] HU (level - width/2 to level + width/2).
window_min = -650 - 1500 / 2.0
window_max = window_min + 1500
pixel_array = (pixel_array - window_min) / (window_max - window_min)
pixel_array = np.clip(pixel_array, 0, 1)
pixel_array = (pixel_array * 255).astype("uint8")
```
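
If the end goal is a NIfTI volume for inference, a minimal alternative sketch (the variable names and the identity affine here are assumptions, not from this thread) skips the PNG round-trip and stacks the windowed float slices directly with nibabel, so the [0, 1] intensities are not quantized to uint8:

```python
import numpy as np
import nibabel as nib

# 'windowed_slices' stands for the list of 2D float arrays in [0, 1]
# produced by the windowing above, one per DICOM slice, in order.
windowed_slices = [np.zeros((512, 512), dtype=np.float32)]  # placeholder
volume = np.stack(windowed_slices, axis=-1)

# np.eye(4) is a placeholder affine; in practice it should be derived from
# the DICOM geometry (PixelSpacing, ImagePositionPatient, etc.).
nib.save(nib.Nifti1Image(volume, affine=np.eye(4)), "case_my.nii.gz")
```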
Additionally, I have some questions. Thank you in advance for your kind reply.
- I don't know how to open your result file, `case_1_seg.nii.gz`. I tried to open it with https://socr.umich.edu/HTML5/BrainViewer, but failed.
- I don't understand why the result NIfTI file is smaller than the original: `case_1.nii.gz` is 31.7 MB (input) vs. `case_1_seg.nii.gz` at 756 KB (output).
- After running `run_inference.py`, how can I get image files like yours, e.g. [img.png](https://github.com/Project-MONAI/research-contributions/blob/main/coplenet-pneumonia-lesion-segmentation/fig/img.png) or [seg.png](https://github.com/Project-MONAI/research-contributions/blob/main/coplenet-pneumonia-lesion-segmentation/fig/seg.png)?
- Because of a RuntimeError, I changed the padding mode of the inference call inside `torch.no_grad()` from `"circular"` to `"replicate"`. I am not sure whether the result is affected by this change of padding mode:
```python
# val_images, roi_size, sw_batch_size, model, 0.0, padding_mode="circular"
# RuntimeError: Padding value causes wrapping around more than once.
val_images, roi_size, sw_batch_size, model, 0.0, padding_mode="replicate"
```
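
For what it's worth, desktop viewers such as ITK-SNAP or 3D Slicer usually open NIfTI files, and the size gap is expected if the output is a label map: a mostly-zero integer mask compresses far better under gzip than the original CT intensities. A hedged sketch (the slice index and output file names are assumptions) of rendering slices like `fig/img.png` and `fig/seg.png` with nibabel and matplotlib:

```python
import nibabel as nib
import matplotlib.pyplot as plt

img = nib.load("case_1.nii.gz").get_fdata()      # input CT volume
seg = nib.load("case_1_seg.nii.gz").get_fdata()  # predicted label map

k = img.shape[-1] // 2  # an arbitrary middle slice, for illustration
plt.imsave("img.png", img[..., k], cmap="gray")
plt.imsave("seg.png", seg[..., k], cmap="gray")
```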