lucidrains/byol-pytorch

[BUG] in trainer.py

ClemensSchwarke opened this issue · 1 comment

I don't think you can set the gradients to zero right before stepping the optimizer. `zero_grad()` wipes the gradients that `backward()` just computed, so the `step()` that follows has nothing to apply:

self.optimizer.zero_grad()
self.optimizer.step()
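For reference, the conventional PyTorch ordering is backward → step → zero_grad (or zero_grad before backward). A minimal sketch of that ordering, using a placeholder model and data rather than the repo's actual trainer:

```python
import torch

# Placeholder model/optimizer/data, not the repo's code.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 4)
target = torch.randn(8, 1)

before = model.weight.detach().clone()

loss = torch.nn.functional.mse_loss(model(x), target)
loss.backward()        # populate .grad
optimizer.step()       # apply the update while gradients are still non-zero
optimizer.zero_grad()  # clear gradients *after* stepping, ready for the next batch

after = model.weight.detach().clone()
```

With the order quoted above (`zero_grad()` immediately before `step()`), the update would be computed from zeroed gradients and the weights would not learn.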