High resolution T1w image segmentation seems suboptimal in some areas
Good morning!
I used SynthSeg 2.0 to segment a T1w image acquired at 7T.
While the segmentation is generally OK, some areas are rather poorly segmented (see for instance the left and right superior frontal gyrus in the screenshot below).
Do you have any recommendations as to how to improve these results?
Do you think that fine-tuning the current model with 7T data is necessary, or is there another way?
Thanks in advance for your help, and thank you for the very nice model and package!
When you say you used SynthSeg 2.0, could you please give me the command you used? Like, did you use the --robust flag? I suspect the results will be better without this flag, because here you have an image of very good quality, and for that the regular SynthSeg model (i.e. without --robust) is better.
I ran python scripts/commands/SynthSeg --i t1.nii.gz --o output_folder, so I was not using --robust :)
Hmm okay, well I don't know, this image looks pretty good to me. Have you observed consistent errors across a bunch of images or is it just this one?
Empirically, I observe rather consistent errors across subjects. In particular, voxels in the CSF are often labelled as gray matter.
The previous FreeSurfer segmentation algorithm was more conservative in that regard (fewer CSF voxels were labelled as gray matter).
In the screenshot below, the segmentation of the same subject with the previous FreeSurfer algorithm is shown on the left, and the SynthSeg one on the right; I have circled in pink one area that I think is problematic, and I could point out other similar errors.
That said, I think these segmentations look fine overall; the model is fast and efficient, and it's a great contribution :)
My point is that it seems to underperform in areas that should be rather easy to segment. Maybe adding (more) high-field examples to the training dataset could prevent this from happening?
Moreover, if I were to fine-tune the current model, how many labeled 7T images do you think would be needed? :)
Thanks for the feedback, much appreciated :) I agree with you, the segmentations look pretty good nearly everywhere, but could indeed be improved in places.
The thing is that SynthSeg is not trained on real images but on synthetic data that don't seek to be realistic (that's the whole domain randomisation idea). This means I cannot "add more 7T images to the training data" ;)
You could try to fine-tune SynthSeg to better segment your 7T images, but note that you would surely lose the "contrast-agnostic" property. I think a reasonable amount of data would be 10 to 20 scans. The good news is that you can initialise your manual segmentations with the outputs from SynthSeg, so hopefully that wouldn't be too much work.
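For instance, here is a minimal sketch of how you could pre-segment a folder of 7T scans to bootstrap the manual labels, reusing the command-line call you quoted above (the folder names and layout are hypothetical):

```python
import glob
import os
import subprocess

in_dir = "7t_scans"          # hypothetical folder with the 10-20 raw 7T scans
out_dir = "initial_labels"   # hypothetical folder for the automatic segmentations
os.makedirs(out_dir, exist_ok=True)

for scan in sorted(glob.glob(os.path.join(in_dir, "*.nii.gz"))):
    # Same invocation as quoted above; output naming follows SynthSeg's defaults.
    subprocess.run(
        ["python", "scripts/commands/SynthSeg", "--i", scan, "--o", out_dir],
        check=True,
    )
# Manually correct the resulting label maps, then use the image/label pairs for fine-tuning.
```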
Sorry, I haven't read your paper thoroughly, but from what I understood, all the synthetic training images are generated from ground-truth, real-life images, aren't they?
If that's the case, wouldn't it make sense to apply your data augmentation routines to ground-truth 7T anatomies as well? That way we could make sure that some high-resolution synthetic images are also fed to the model during training.
I guess this could only add signal to the underlying distribution you are trying to fit with this model, don't you think? :)
Also, I was thinking: maybe it does make sense to be less conservative about where gray matter extends, just like your model is.
Indeed, missing gray matter voxels could be detrimental to finding meaningful signal in BOLD studies, while having too many of them (because the model mistook some CSF voxels for gray matter voxels) adds noise, which is probably less of a problem.
I guess it's comparable to a classical precision/recall tradeoff, and my point is that SynthSeg 2.0 might be favouring gray-matter recall at the cost of precision, but I don't know yet how much of an issue this is down the line.
However, while this tradeoff is probably fine for people working with volumetric images, I can imagine that it will be an issue for neuroscientists who first project their volumetric images onto meshes and then run their analyses. I think the amount of noise generated by a loose segmentation will be much more damaging to the signal they're trying to capture, as the projection step will mix signal from CSF voxels with that of gray matter voxels.
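In case it helps to quantify this, here is a small sketch of how one could measure that precision/recall tradeoff for gray matter, treating the previous FreeSurfer segmentation as the reference purely for comparison (file names are hypothetical, both volumes are assumed to share the same grid, and labels 3/42 are assumed to be left/right cerebral cortex in the FreeSurfer convention):

```python
import nibabel as nib
import numpy as np

# Hypothetical inputs: both segmentations resampled to the same voxel grid.
ref = nib.load("aseg_freesurfer.nii.gz").get_fdata()   # previous FreeSurfer labels
new = nib.load("t1_synthseg.nii.gz").get_fdata()       # SynthSeg labels

gm_labels = (3, 42)              # left/right cerebral cortex (FreeSurfer convention)
ref_gm = np.isin(ref, gm_labels)
new_gm = np.isin(new, gm_labels)

tp = np.sum(new_gm & ref_gm)
precision = tp / np.sum(new_gm)  # how many SynthSeg GM voxels the reference also calls GM
recall = tp / np.sum(ref_gm)     # how much of the reference GM SynthSeg recovers
print(f"GM precision={precision:.3f}, recall={recall:.3f}")
```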
Our model doesn't use any real image for training. It only uses segmentation maps, from which we create synthetic data of random contrast by sampling from a Gaussian mixture model with randomised parameters. Similarly, we also randomise the simulated resolution of each training example. As a result, the network learns features that are independent of contrast and resolution, and can thus be applied to brain scans of any contrast and resolution at test time.
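As a toy illustration of that idea (this is not the actual SynthSeg training code, just a rough numpy sketch): for each training example, every label gets its own randomly drawn Gaussian, so the same label map yields a different contrast each time.

```python
import numpy as np

def synth_from_labels(label_map, rng=None):
    """Toy GMM sampling: one random Gaussian of intensities per label."""
    rng = np.random.default_rng() if rng is None else rng
    image = np.zeros(label_map.shape, dtype=np.float32)
    for lab in np.unique(label_map):
        mean = rng.uniform(0.0, 255.0)   # random contrast for this label
        std = rng.uniform(1.0, 25.0)     # random within-label variability
        mask = label_map == lab
        image[mask] = rng.normal(mean, std, size=int(mask.sum()))
    return image

# A tiny 3-label map produces a different random "contrast" on every call.
toy_labels = np.array([[0, 0, 1, 1], [0, 2, 2, 1]])
print(synth_from_labels(toy_labels))
```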
So you can fine-tune the model on your 7T data, but by doing so you will probably lose this contrast-robustness property.
Also, it's not us who are more or less conservative about the segmented regions; this is entirely up to the network. The trained CNN produces soft segmentation probability maps for each label, and we then simply apply an argmax operation to obtain the hard segmentation. Hence, I don't have much control over how "conservative" the network is at the grey/white matter boundary. Note that the smoothness mainly comes from the low-resolution training data: SynthSeg can robustly segment scans with resolutions between 1 and 9 mm. Even though SynthSeg can also segment scans at higher resolution, this was not its primary intended use. Fine-tuning on 7T data will certainly help you in that regard.
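To make that last step concrete, the hard segmentation is just the label with the highest soft probability at each voxel; something along these lines (shapes and label values are purely illustrative):

```python
import numpy as np

# probs: soft probability maps, shape (X, Y, Z, n_labels); here a dummy 2x2x2 volume with 4 labels.
probs = np.random.dirichlet(np.ones(4), size=(2, 2, 2))
label_values = np.array([0, 2, 3, 42])               # illustrative label ids for each channel
hard_seg = label_values[np.argmax(probs, axis=-1)]   # argmax over the label axis
print(hard_seg)
```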