Fine-Tuning Strategy for Single-Class Lung Cancer Dataset in MultiTalent
I'm currently working with a lung cancer dataset using MultiTalent. I have obtained inference results and am now planning to fine-tune the model to assess potential improvements. However, my dataset has only one class, which already exists in the pretrained model. I'm seeking guidance on the fine-tuning strategy:
- Should I treat the single target class as a new class during fine-tuning, considering it's from a different dataset?
- Alternatively, is it advisable to assign the same class label as in the pretrained model (corresponding to 06_lungnodule)?
I am also concerned that fine-tuning MultiTalent (Task_100) on a single-class dataset will restrict the model's output to just one class prediction. Specifically, I am unsure whether the segmentation heads correspond to the new target dataset's classes only, or whether they encompass all pretrained classes along with the new dataset's classes.
I will greatly appreciate your input. Thank you.
Hi,
the default fine-tuning strategy discards all old heads and uses a random initialization for the new head for the new dataset. Since fine-tuning with the trainer nnUNetTrainerV2_warmupsegheads includes a 10-epoch warmup during which only the head, and not all parameters of the network, is fine-tuned, I think you could just follow this default strategy.
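The warmup idea described above can be sketched in PyTorch. This is a minimal illustration, not the actual trainer code: the helper `set_warmup_mode` and the module name `seg_head` are hypothetical stand-ins for however MultiTalent names its segmentation heads.

```python
import torch.nn as nn

def set_warmup_mode(network: nn.Module, warmup: bool) -> None:
    # Hypothetical helper, not part of MultiTalent: during warmup,
    # freeze every parameter except those of the (new) segmentation head.
    for name, param in network.named_parameters():
        is_head = "seg_head" in name  # assumed naming convention
        param.requires_grad = is_head or not warmup

# Tiny stand-in network: a backbone plus a freshly initialized head.
net = nn.Sequential()
net.add_module("backbone", nn.Conv2d(1, 8, kernel_size=3))
net.add_module("seg_head", nn.Conv2d(8, 2, kernel_size=1))

set_warmup_mode(net, warmup=True)   # warmup epochs: only seg_head is trainable
# ... after the warmup epochs ...
set_warmup_mode(net, warmup=False)  # full fine-tuning: everything trainable
```

After the warmup phase the randomly initialized head has adapted to the frozen pretrained features, so unfreezing the whole network is less likely to destroy the pretrained weights early in training.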
Otherwise, you would need to manually map the state_dict keys corresponding to 06_lungnodule to the new heads. I imagine that this can be annoying...
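Doing that remapping by hand would amount to renaming keys in the checkpoint's state_dict. A minimal sketch, assuming made-up key prefixes (inspect your own checkpoint's `state_dict.keys()` for the real layout):

```python
def remap_head_keys(state_dict, old_prefix, new_prefix):
    """Rename every key under old_prefix to live under new_prefix instead.

    Purely illustrative: the prefixes used below are invented and do not
    reflect MultiTalent's actual state_dict key layout.
    """
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(old_prefix):
            remapped[new_prefix + key[len(old_prefix):]] = value
        else:
            remapped[key] = value
    return remapped

# Toy checkpoint with invented key names:
ckpt = {
    "backbone.conv1.weight": "...",
    "heads.06_lungnodule.weight": "w_old",
    "heads.06_lungnodule.bias": "b_old",
}
new_ckpt = remap_head_keys(ckpt, "heads.06_lungnodule.", "heads.new_dataset.")
# The lung-nodule head weights now sit under the new dataset's head keys.
```

This only makes sense if the old and new heads have identical shapes (same number of output channels); otherwise loading the remapped checkpoint will fail.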
> I am also concerned that fine-tuning MultiTalent (Task_100) on a single-class dataset will restrict the model's output to just one class prediction.
Yes. The new model is only able to predict the classes from the dataset it was fine-tuned on. It's not a "continual" learning setting.
Best
Constantin
Thank you again, Constantin, for your quick response. I am finally able to start the fine-tuning on my dataset.