Optimizing hyperparameters
Hi SUPPORT,
I have used your system extensively on a number of volumetric datasets and I am very pleased with the results. However, I would still like to see if I can improve the denoising. Obviously, some parameters, such as the blind spot size, are VERY dependent on the nature of the data, but I was wondering whether the default values for the capacity (given by the channel sizes and the depth) and the batch size are a tradeoff between performance and training/inference time, or whether they actually represent an approximate optimum for denoising performance given the risk of overfitting, etc. This would be for large volumetric datasets of, let's say, size (1500, 1500, 10000).
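To be concrete, the kind of experiment I have in mind is a small sweep over the capacity and batch-size knobs while keeping the blind spot fixed. The sketch below is purely illustrative; the parameter names (`channels`, `depth`, `batch_size`) and the `train_and_evaluate` stub are my own placeholders, not SUPPORT's actual training options:

```python
import itertools

# Hypothetical capacity sweep: these names mirror the knobs discussed above,
# they are not SUPPORT's real CLI flags or API.
search_space = {
    "channels": [16, 32, 64],   # per-layer channel width
    "depth": [3, 4, 5],         # number of encoder/decoder levels
    "batch_size": [8, 16],
}

def train_and_evaluate(channels: int, depth: int, batch_size: int) -> float:
    # Placeholder: plug in the actual training/validation call here and
    # return a quality metric (e.g. PSNR on a held-out sub-volume).
    return 0.0

best_score, best_config = float("-inf"), None
for channels, depth, batch_size in itertools.product(
    search_space["channels"], search_space["depth"], search_space["batch_size"]
):
    score = train_and_evaluate(channels, depth, batch_size)
    if score > best_score:
        best_score = score
        best_config = {"channels": channels, "depth": depth, "batch_size": batch_size}

print("Best configuration:", best_config, "score:", best_score)
```

If the defaults already sit near the optimum for data of this scale, I would rather not burn the compute on a sweep like this, hence the question.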
If you would prefer, we can communicate by email as well; I just thought any answers might be useful to others.
Thank you for your time and this wonderful tool.