broadinstitute/CellBender

Importance of model loss

ShadHOH opened this issue · 0 comments

Hi, thanks for a great tool!

I'm curious about the significance of the model's loss. I've noticed that as you increase the low_count_threshold parameter, both the test and training loss shoot up quite a bit. Currently I am working with low_count_threshold = 25 and see losses in the realm of ~5000, whereas the default setting (and the examples I can find in the documentation and in this repo's issues) shows a loss of around ~1800. Increasing the parameter further naturally increases the loss further.

Based on previously submitted issues, the loss does not appear to be a major concern, and in my case the results only get better when I raise the parameter and ignore the change in loss - better separation between real cells and empty droplets, better learning curves, etc. Still, I am curious what the loss reflects and, more importantly, how it may influence the results, if at all.