ELEKTRONN/elektronn3

Input normalization

SebastienTs opened this issue · 0 comments

I wonder why input normalization has such a large impact on the training of UNets, given that the first batch normalization layer is applied very early in the network (and should play a similar role).
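
To make the distinction concrete, here is a minimal PyTorch sketch (not elektronn3 code; the statistics and layer shapes are made up for illustration). Input normalization applies fixed, dataset-level statistics to every sample before the network sees it, whereas the first BatchNorm layer only acts after the first convolution and (at training time) uses per-batch statistics plus a learnable affine, so its effect depends on batch composition:

```python
import torch
import torch.nn as nn

# Input normalization: fixed, dataset-level statistics applied identically to
# every sample, before the network. Values are hypothetical, computed offline.
dataset_mean, dataset_std = 0.6, 0.2
x = torch.rand(4, 1, 32, 64, 64)  # (N, C, D, H, W) fluorescence volume
x_norm = (x - dataset_mean) / dataset_std

# First block of a UNet-style encoder with BatchNorm: normalization happens
# only after the first convolution, using per-batch statistics and a
# learnable affine (scale and shift).
first_block = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=3, padding=1),
    nn.BatchNorm3d(32),
    nn.ReLU(inplace=True),
)
out = first_block(x_norm)
print(out.shape)  # torch.Size([4, 32, 32, 64, 64])
```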

I'm mostly asking because making input normalization dispensable would avoid the downsides of each normalization strategy, especially for fluorescence microscopy, where absolute intensity is often an important source of information.
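
To illustrate the intensity point with a hypothetical example (the image values and dataset statistics below are invented): a per-sample z-score maps a dim and a bright image onto the same distribution and thus discards absolute intensity, while a fixed global normalization only shifts and rescales all samples identically, so their relative brightness survives:

```python
import torch

dim_img = torch.rand(1, 64, 64) * 0.1     # low-intensity sample
bright_img = torch.rand(1, 64, 64) * 0.9  # high-intensity sample

def per_sample_norm(img):
    # Each image is normalized with its own statistics.
    return (img - img.mean()) / (img.std() + 1e-8)

def global_norm(img, mean=0.5, std=0.3):
    # Hypothetical dataset-level statistics shared by all images.
    return (img - mean) / std

# After per-sample normalization, both images have mean ~0: the absolute
# intensity difference is gone.
print(per_sample_norm(dim_img).mean().item(), per_sample_norm(bright_img).mean().item())

# After global normalization, the dim image stays well below the bright one.
print(global_norm(dim_img).mean().item(), global_norm(bright_img).mean().item())
```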