How about using the size constraint in a fully supervised setting?
Closed this issue · 2 comments
JunMa11 commented
Dear @HKervadec ,
Thanks for sharing the code. Really nice work.
Have you tried imposing the size constraint in fully supervised segmentation?
In other words, the loss would be `CrossEntropy + \lambda C(V_S)`.
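A minimal numpy sketch of that combined objective, assuming a quadratic barrier for `C(V_S)` (as in the constrained-CNN losses paper); the exact penalty used in the code may differ:

```python
import numpy as np

def size_penalty(probs, lower, upper):
    # C(V_S): quadratic barrier on the soft size of the predicted
    # foreground (sum of probabilities), zero inside [lower, upper]
    v = probs.sum()
    if v < lower:
        return float((v - lower) ** 2)
    if v > upper:
        return float((v - upper) ** 2)
    return 0.0

def total_loss(ce, probs, lower, upper, lam=0.01):
    # CrossEntropy + lambda * C(V_S); `lam` is an illustrative weight
    return ce + lam * size_penalty(probs, lower, upper)
```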
Best,
Jun
HKervadec commented
Hey,
We haven't tried it yet, but the precise-constraints setting can give a
rough idea of what to expect. There are several interesting aspects to
imposing constraints with fully labeled images:
* Can alleviate the uncertainty in the labels; experts don't fully
agree on the boundary of the object, often ending up with a DSC of 0.9
between them. However, they would agree on the size
* Can compute higher-order functions (such [as
centroid](https://arxiv.org/abs/1904.04205), and many more) based on
those labels, and then constrain them as a different way of supervising
the training. It is not clear yet how that would fare against pixel-wise
supervision, but my personal take is that it could improve
generalization and reduce overfitting
* Can simply act as a regularizer
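To make the second point concrete, here is a sketch of a differentiable centroid computed from a probability map, with a tolerance-based penalty; the function names and the tolerance scheme are illustrative, not the paper's exact formulation:

```python
import numpy as np

def soft_centroid(probs):
    # differentiable centroid of a (H, W) foreground probability map:
    # probability-weighted mean of the pixel coordinates
    h, w = probs.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    mass = probs.sum() + 1e-8  # avoid division by zero on empty maps
    return np.array([(ys * probs).sum() / mass,
                     (xs * probs).sum() / mass])

def centroid_penalty(probs, target, tol):
    # zero while the centroid stays within `tol` pixels of `target`,
    # quadratic beyond that
    dist = np.linalg.norm(soft_centroid(probs) - np.asarray(target))
    return float(max(0.0, dist - tol) ** 2)
```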
This is really easy to try with the current code; you only need to
change either the loss function or its parameters (`idc` from `[1]` to
`[0, 1]`, for instance).
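A sketch of what that `idc` change amounts to, assuming `idc` selects which classes the size constraint applies to (the repo's actual loss class has a different interface; this is only illustrative):

```python
import numpy as np

def multi_class_size_penalty(probs, bounds, idc):
    # probs: (C, H, W) softmax output; bounds: per-class (lower, upper)
    # size bounds; idc: which classes are constrained
    # ([1] -> foreground only, [0, 1] -> background as well)
    total = 0.0
    for c in idc:
        v = probs[c].sum()
        lo, hi = bounds[c]
        if v < lo:
            total += (v - lo) ** 2
        elif v > hi:
            total += (v - hi) ** 2
    return float(total)
```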
JunMa11 commented
Hi Hoel,
Thanks very much for your reply and valuable insights.
I will try it and get back to you with the results.
Best,
Jun