Small question about DeepAtlas
mikami520 opened this issue · 0 comments
I am working through the DeepAtlas tutorial and have a quick question about `warp` and `warp_nearest` in the training loop. The tutorial uses alternating training to save memory, which means one network is frozen while the other is being trained. In the segmentation-network phase, the comment (shown below) says we could use the differentiable `warp()` instead of `warp_nearest()` to achieve joint training. But in this case the registration network is frozen, so how could backpropagation and gradient descent update the registration network through this differentiable `warp()`? In my opinion, we should not freeze the registration network in this phase if we want to use `warp()` to train the registration network jointly.
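To make my question concrete, here is a minimal sketch of how I understand the segmentation phase (assuming MONAI's `Warp` block; `reg_net`/`seg_net` are tiny placeholder networks and the MSE loss is a stand-in, not the tutorial's actual models or losses):

```python
import torch
import torch.nn as nn
from monai.networks.blocks import Warp

# Tiny placeholder networks for illustration only (not the tutorial's models):
# reg_net maps the concatenated (moving, fixed) pair to a 3-channel DDF,
# seg_net maps an image to 2-class logits.
reg_net = nn.Conv3d(2, 3, kernel_size=3, padding=1)
seg_net = nn.Conv3d(1, 2, kernel_size=3, padding=1)

warp = Warp(mode="bilinear")         # differentiable interpolation
warp_nearest = Warp(mode="nearest")  # what the tutorial uses for label maps

moving_img = torch.rand(1, 1, 16, 16, 16)
fixed_img = torch.rand(1, 1, 16, 16, 16)
moving_label = torch.rand(1, 1, 16, 16, 16)

# Segmentation phase of the alternating schedule: reg_net is frozen.
for p in reg_net.parameters():
    p.requires_grad_(False)
for p in seg_net.parameters():
    p.requires_grad_(True)

ddf = reg_net(torch.cat([moving_img, fixed_img], dim=1))
warped_label = warp(moving_label, ddf)  # differentiable warp of the moving label
seg_prob = seg_net(fixed_img).softmax(dim=1)[:, 1:2]

loss = nn.functional.mse_loss(seg_prob, warped_label)
loss.backward()

# Gradients reach seg_net, but reg_net gets none because its parameters are
# frozen, even though warp() is differentiable with respect to the DDF.
print(any(p.grad is not None for p in reg_net.parameters()))  # False
print(any(p.grad is not None for p in seg_net.parameters()))  # True
```

If my sketch matches the tutorial's setup, the loss still backpropagates through the differentiable `warp()`, but `reg_net` receives no gradients because its parameters are frozen, which is exactly the situation I am asking about.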
Hope someone can help me, and thank you in advance!