mazurowski-lab/segmentation-guided-diffusion

Questions about the work and datasets

Closed this issue · 5 comments

Thanks for your interesting work!
I have two questions about it.

  1. Where can I find the blood vessel masks for the breast data? I downloaded your synthetic dataset, and I could only see the breast region masks.
  2. If I want to train with multiple classes, how should I give the input masks to the model? Your instructions only describe the image-folder layout, which seems to support only a single class (a 1-channel mask image).

Additionally, is it possible to generate new data that accurately follows the segmentation mask, or does the mask only serve as a rough reference during the generative process?
I am curious about this because, when checking samples after training, some structures appear in the positions given by the mask and some do not.

Sorry to take up your time. Thanks!

Hello,
Thank you for your questions! I'm happy to help. I'll try to answer one at a time.

  1. I just confirmed that the dataset's segmentations (in the segmentations folder in https://drive.google.com/file/d/1yaLLdzMhAjWUEzdkTa5FjbX3d7fKvQxM/view) include breast, dense tissue, and blood vessel segmentations for the corresponding real images. However, please note that some slices with breast segmentations do not contain any dense tissue and/or blood vessels (the segmentations are not incomplete; those structures simply are not present in every slice). Maybe that's why you're not seeing any vessels?
  2. Yes, currently the model is designed to use multi-class segmentations with one channel, i.e., each class has a certain pixel value. For example, a breast MRI mask has possible values of 0, 1, 2, 3 for background, breast, blood vessel, and dense tissue, respectively (see the short sketch after this list). This was chosen rather than using one-hot encoded masks (where the number of mask channels equals the number of classes) so that the input size of the model does not increase with the number of classes. I'll update the readme to better explain how to use this input mask format!
  3. I'm sorry, I don't understand your last question. Could you go into more detail?
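
For illustration, here is a minimal sketch of what such a one-channel, multi-class mask could look like on disk (the file name and the toy region coordinates here are just assumptions for the example; the readme is the authoritative reference for the expected folder layout):

```python
import numpy as np
from PIL import Image

# Toy 256x256 one-channel mask where each integer pixel value is a class:
# 0 = background, 1 = breast, 2 = blood vessel, 3 = dense tissue
mask = np.zeros((256, 256), dtype=np.uint8)
mask[40:220, 60:200] = 1      # breast region
mask[100:110, 120:180] = 2    # a small blood vessel
mask[150:190, 90:150] = 3     # a dense tissue patch

# Save as an 8-bit single-channel PNG; the pixel values ARE the class labels,
# not colors, so the image will look nearly black when viewed directly.
Image.fromarray(mask, mode="L").save("mask_example.png")

# Sanity check when loading: the unique values should be the class indices.
loaded = np.array(Image.open("mask_example.png"))
print(loaded.shape, np.unique(loaded))  # (256, 256) [0 1 2 3]
```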

Thanks for the fast reply to my question!
I understand your explanation.
On question 3: what I'm hoping for at inference time is that, if I give a mask with the desired shape as input, the model generates new content according to each region of the mask (e.g., the input mask and Seg-Diff output pair in Figure 2 of your paper).
If that is possible, could I get the training parameters you used?

Thanks, and sorry for asking so many questions.

Hi, I'm still not sure that I get your question, but to try to answer: once the model is trained, it will generate an image that accurately follows an input mask, even if the input mask is totally new/unseen in training (which is why we evaluate on masks from the test set, e.g. in figure 2).
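
Conceptually (this is not the repository's actual code, just a rough sketch of the idea; the model and scheduler configuration below are assumptions for illustration), the mask steers generation because it is concatenated to the noisy image at every denoising step:

```python
import torch
from diffusers import UNet2DModel, DDPMScheduler

# Hypothetical setup: a UNet that takes the noisy image (1 channel) concatenated
# with the class-label mask (1 channel) as input and predicts the noise (1 channel).
model = UNet2DModel(sample_size=256, in_channels=2, out_channels=1)
scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(100)

mask = torch.zeros(1, 1, 256, 256)    # your multi-class mask (integer values 0..3)
sample = torch.randn(1, 1, 256, 256)  # start from pure noise

model.eval()
with torch.no_grad():
    for t in scheduler.timesteps:
        # The mask is re-attached at every step, so the denoising trajectory
        # is pushed toward an image whose anatomy matches the mask regions.
        model_input = torch.cat([sample, mask], dim=1)
        noise_pred = model(model_input, t).sample
        sample = scheduler.step(noise_pred, t, sample).prev_sample
```

Because the mask is supplied at every step rather than only at the start, the generated structures are encouraged to align with the mask regions throughout sampling.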

Check out the pretrained model parameters here: https://github.com/mazurowski-lab/segmentation-guided-diffusion?tab=readme-ov-file#2b-train-your-own-models.

Did this answer your question?

Thanks for the kind reply!! I understand now.
If I have more questions later, I will ask!!!

Thank you~~