Strong tiling artifacts
Closed this issue · 3 comments
When I run the network prediction, the result has strong tiling artifacts. I use quite a big halo, but it doesn't help. I encountered this before when using a U-Net that wasn't trained for long enough or didn't see enough ground truth, which is roughly the case when applying a pretrained network to different data. As a workaround, it would be nice to have an option for a smooth transition between tiles, as in the original U-Net publication: within the halo, the weight of each tile in the result falls linearly towards the edge of the tile.
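The linear blending described above could be sketched roughly like this (a minimal NumPy sketch, not the pytorch-3dunet implementation; `tile_weight` and `blend_tiles` are hypothetical names): each tile gets a per-voxel weight that is 1 in the interior and ramps linearly down across the halo, and overlapping predictions are accumulated and normalised by the total weight.

```python
import numpy as np

def tile_weight(tile_shape, halo):
    """Per-voxel weight for a tile: 1 in the interior, falling linearly
    towards each tile edge across the halo (hypothetical helper)."""
    weight = np.ones(tile_shape, dtype=np.float32)
    for axis, (size, h) in enumerate(zip(tile_shape, halo)):
        if h == 0:
            continue
        ramp = np.ones(size, dtype=np.float32)
        # linear ramp from 1/(h+1) at the edge up to 1 in the interior
        ramp[:h] = np.arange(1, h + 1, dtype=np.float32) / (h + 1)
        ramp[-h:] = ramp[:h][::-1]
        shape = [1] * len(tile_shape)
        shape[axis] = size
        weight *= ramp.reshape(shape)
    return weight

def blend_tiles(out_shape, tiles, halo):
    """Accumulate overlapping tile predictions with linear halo weights
    and normalise by the total weight at each voxel.
    `tiles` is an iterable of (tuple-of-slices, prediction) pairs."""
    acc = np.zeros(out_shape, dtype=np.float32)
    norm = np.zeros(out_shape, dtype=np.float32)
    for slices, pred in tiles:
        w = tile_weight(pred.shape, halo)
        acc[slices] += pred * w
        norm[slices] += w
    return acc / np.maximum(norm, 1e-8)
```

Because the weights are normalised per voxel, a constant signal stays constant across tile seams, which is exactly what suppresses the square artifacts.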
Furthermore, it propagates to the superpixels (here GASP was run on the wrong channel, but you can still clearly see the squares).
Fixing #205 doesn't help, i.e. changing the size of the halo doesn't improve the prediction, which indicates more serious problems. Hence #220 and wolny/pytorch-3dunet#113
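For context, the halo's job is to give the network extra surrounding context that is then discarded before stitching, so only the well-contextualised interior ends up in the output. A minimal sketch of that cropping step (illustrative only, not the actual pytorch-3dunet code; `crop_halo` is a hypothetical name):

```python
import numpy as np

def crop_halo(pred, halo):
    """Drop the halo from a tile's prediction before stitching,
    keeping only the interior where the network saw full context."""
    slices = tuple(slice(h, s - h) if h > 0 else slice(None)
                   for s, h in zip(pred.shape, halo))
    return pred[slices]
```

If this cropping is wrong (or the halo is ignored), enlarging the halo cannot help, which would match the behaviour observed here.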
Progress:
- The halo implementation is fixed, which already greatly improved the prediction (wolny/pytorch-3dunet#113 (comment))
- Intensity normalisation doesn't cause problems, at least for ovules (the training data) and mouse (another dataset).
- Batch norm fixes the hallucinations created by group norm on the mouse embryo data (i.e. the network needs to be retrained with batch norm).