QC assessment is not working
I want to assess your models in our work and compare them with our in-house models. I got the tissue segmentation working by following the instructions on the GitHub page, but the QC part does not work. I provided the inputs, and it gets stuck at the "Processing <slide_id>.svs" stage after printing the following lines (the warning is expected):
"""
/opt/conda/envs/imaging-compath/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
warnings.warn(
/opt/conda/envs/imaging-compath/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=ResNet18_Weights.IMAGENET1K_V1. You can also use weights=ResNet18_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
=> loading checkpoint '/home/checkpoint_106.pth'
=> loaded checkpoint '/home/checkpoint_106.pth' (epoch 107)
slides mpp manually set to 1.0
"""
Is there a trick to get past this step?
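In case it is useful, this is a minimal check I can run to see whether the slide itself reads cleanly outside the pipeline. It assumes the pipeline reads the .svs files with OpenSlide, and the file path is only a placeholder:

```python
# Minimal sanity check that the slide opens and a region reads outside the
# QC pipeline. The path below is a placeholder for one of our .svs files.
import openslide

slide = openslide.OpenSlide("/path/to/slide.svs")
print("Dimensions:", slide.dimensions)

# If this small read also stalls, the problem is in slide I/O rather than
# in the QC model itself.
region = slide.read_region((0, 0), 0, (512, 512))
print("Read region:", region.size)
```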
Are you running the code on a GPU?
QC generates multiple outputs, both images and text files.
Yes, I am running the system on a machine with a GPU. Should I somehow tell the code that I am using a GPU? I do not see any argument for selecting the GPU in the quality-assessment code. The tissue segmentation code uses the GPU without any problem. The output folder contains the ".jpg" file, but I do not see any text files; that might be the issue!
I ran it one more time to see whether it is just slow because it is running on the CPU (even though a GPU is available); since yesterday morning, it is still processing a single slide.
Is there anything that I can do?
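For reference, this is the kind of generic check I could run to confirm that PyTorch actually sees the GPU in this environment. It is not an option of run.py, just a sanity check of the setup; as far as I can tell, restricting the device would have to go through the standard CUDA_VISIBLE_DEVICES environment variable rather than a script argument.

```python
# Generic check that this environment exposes a GPU to PyTorch. This is not
# an option of the quality-assessment script; it only verifies the setup.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```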
We have tested this on a CPU-only device for a large cohort and it shouldn't take that long at all! Can you please check line 136 in "quality-assessment/run.py"? Do you see the message "Processing YourSlideName"? For debugging, you can simply comment out process_tiles (line 148).
This will skip processing the slide but should still generate empty files and images quickly. If you still get no output, there is likely a problem with the initial configuration. Are you running this on a TCGA slide?
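If you want to narrow it down further, you could also wrap the call with a timer, along these lines. This is only a sketch: process_tiles(...) stands for the existing call with whatever arguments run.py passes, and slide_name is a placeholder for the variable printed in the "Processing ..." message.

```python
# Debugging sketch for quality-assessment/run.py around line 148:
# wrap the per-slide processing call with a timer to see whether it returns.
import time

start = time.time()
print(f"Starting tile processing for {slide_name}", flush=True)
process_tiles(...)  # the existing call; comment it out to skip processing
print(f"Finished tile processing after {time.time() - start:.1f} s", flush=True)
```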
Sorry, I got really busy with a couple of tasks.
Can you please check Line 136 in "quality-assessment/run.py"? Do you see the message "Processing YourSlideName"?
Yes, I see that.
For debugging you can simply comment out process_tiles (line 148). This will skip processing the slide but then should generate empty files and images quickly.
I did this, and it generated the output images really quickly.
Are you running this on a TCGA slide?
No, we have our own slides, but they are from both Philips and Leica scanners, and they are normal images.
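Since the log reports "slides mpp manually set to 1.0", here is how I could inspect what resolution metadata the Philips and Leica files actually expose, assuming OpenSlide is used to read them (the path is again a placeholder):

```python
# Print the vendor and resolution metadata OpenSlide reports for a slide;
# the file path is a placeholder for one of our Philips or Leica files.
import openslide

slide = openslide.OpenSlide("/path/to/slide.svs")
props = slide.properties
print("Vendor:", props.get(openslide.PROPERTY_NAME_VENDOR))
print("MPP-X:", props.get(openslide.PROPERTY_NAME_MPP_X))
print("MPP-Y:", props.get(openslide.PROPERTY_NAME_MPP_Y))
print("Objective power:", props.get(openslide.PROPERTY_NAME_OBJECTIVE_POWER))
print("Levels:", slide.level_count, "Downsamples:", slide.level_downsamples)
```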
Is there anything we can do to fix this?