rounding errors fixed?
mepster opened this issue · 3 comments
Thanks for making this library!
In the README there is a comment:
Update: John St. John did some work and found that the enformer-official-rough model hits the reported marks in the paper - human Pearson R of 0.625 for validation, and 0.65 for test.
Does this mean that the earlier concern below has been fixed?
There are still some rounding errors that seem to be accruing across the layers, resulting in an absolute error as high as 0.5. However, the correlation coefficients look good, so I am releasing the 'rough'ly working version. Will keep working on figuring out where the numerical errors are happening (it may be the attention pooling module, as I noticed the attention logits are pretty high).
Thank you! It's not 100% clear whether we can go ahead and use the library without worrying about the rounding errors.
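(For readers puzzled by how the absolute error can be large while the correlation stays good: here is a minimal, self-contained sketch in plain PyTorch, not part of the library, contrasting worst-case absolute error with Pearson R. The tensors are placeholders standing in for the port's output and the official TensorFlow reference on the same input.)

```python
import torch

def max_abs_error(pred: torch.Tensor, ref: torch.Tensor) -> float:
    # Worst-case elementwise deviation between the two outputs.
    return (pred - ref).abs().max().item()

def pearson_r(pred: torch.Tensor, ref: torch.Tensor) -> float:
    # Pearson correlation over all elements. It is invariant to scale and
    # offset and dominated by the overall shape of the predictions, so it
    # can stay high even while a few elements are off by as much as ~0.5.
    p = pred.flatten() - pred.mean()
    r = ref.flatten() - ref.mean()
    return ((p * r).sum() / (p.norm() * r.norm())).item()

# Placeholder tensors standing in for the two models' outputs
# (896 bins x 5313 human tracks).
ref = torch.randn(896, 5313)
pred = ref.clone()
pred[0, 0] += 0.5  # a single accumulated rounding error of 0.5

print(max_abs_error(pred, ref))  # 0.5
print(pearson_r(pred, ref))      # still ~1.0
```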
It isn't fixed, unfortunately.
However, the Pearson R looked OK on the validation dataset, and I was able to fine-tune it for my use cases.
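(For context, fine-tuning on new tracks goes through the `HeadAdapterWrapper` from `enformer_pytorch.finetune`, per the repository README. The sequence length, track count, and target shape below are illustrative assumptions; check the README for the exact signature.)

```python
import torch
from enformer_pytorch import Enformer
from enformer_pytorch.finetune import HeadAdapterWrapper

enformer = Enformer.from_pretrained('EleutherAI/enformer-official-rough')

# Wrap the pretrained trunk with a freshly initialized head for new tracks.
model = HeadAdapterWrapper(
    enformer = enformer,
    num_tracks = 128,  # assumed: number of tracks in your fine-tuning dataset
)

seq = torch.randint(0, 5, (1, 196_608))  # token ids for A, C, G, T, N
target = torch.randn(1, 896, 128)        # assumed shape: (batch, bins, tracks)

loss = model(seq, target = target)  # wrapper returns the training loss directly
loss.backward()
```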
@mepster per private correspondence, the model is working fine, both for inference and fine-tuning
I'm not going to bother making it perfect, as this was the project that made me vow never to touch TensorFlow ever again
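(For anyone landing here later, a minimal inference sketch with the rough checkpoint, following the repository README; treat the exact shapes as assumptions. The track counts match the Enformer paper's human and mouse heads.)

```python
import torch
from enformer_pytorch import Enformer

model = Enformer.from_pretrained('EleutherAI/enformer-official-rough')
model.eval()

seq = torch.randint(0, 5, (1, 196_608))  # token ids for A, C, G, T, N

with torch.no_grad():
    output = model(seq)

print(output['human'].shape)  # (1, 896, 5313) human tracks
print(output['mouse'].shape)  # (1, 896, 1643) mouse tracks
```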