Question on `EquivariantLayerNorm` and noise `AccumulatedNormalization`
Hi, thanks for your impressive work. I have two questions about details of the pre-training denoising process:
- The `EquivariantLayerNorm`. Running `test_scalar_invariance` in `tests/test_equivariance.py` with `layernorm_on_vec: whitened` set, I found that the model is not equivariant. The problem lies in the addition of the regularization matrix `self.eps * reg_matrix`; the model is equivariant if I remove this term.
- The noise `AccumulatedNormalization`. In your implementation, the noise is accumulated across all batches and normalized, and the result is then used as the supervised signal for computing the loss. I guess this operation (`AccumulatedNormalization`) is done to ensure `mean=0, std=1` after applying the `noise_scale`. But you could directly set `noise = torch.randn_like(data.pos)`, `data.pos_target = noise`, and `data.pos = data.pos + noise * self.hparams['position_noise_scale']` without the `AccumulatedNormalization` (see the sketch after this list). Could I ask the reason? Thanks.
Hi, thanks for your questions!
Regarding `EquivariantLayerNorm`, the additional `self.eps * reg_matrix` term is essentially a precaution to avoid inverting an ill-conditioned covariance matrix. Ideally, `self.eps` would be small enough not to affect equivariance. Without it, I noticed that an occasionally ill-conditioned covariance matrix disturbed training. Feel free to play around with it, and let me know if you have other ideas!
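The trade-off can be checked numerically. Below is a minimal sketch of whitening vector features with the inverse matrix square root of their 3×3 covariance, assuming (hypothetically) that the regularizer is an anisotropic diagonal matrix. A regularizer proportional to the identity commutes with rotations (`R @ I @ R.T == I`, so `(R C R.T + eps*I)^(-1/2) = R (C + eps*I)^(-1/2) R.T`) and preserves exact equivariance; an anisotropic one breaks that identity.

```python
import torch

def symsqrtinv(matrix):
    # Inverse matrix square root of a symmetric PSD matrix via eigendecomposition.
    s, v = torch.linalg.eigh(matrix)
    return v @ torch.diag(s.clamp_min(1e-12).rsqrt()) @ v.T

def whiten(vecs, eps, reg):
    # vecs: (3, d) vector features; whiten across the spatial axis.
    covar = vecs @ vecs.T / vecs.shape[1]  # 3x3 covariance of the vector channels
    return symsqrtinv(covar + eps * reg) @ vecs

torch.manual_seed(0)
vecs = torch.randn(3, 16, dtype=torch.float64)
# Random orthogonal matrix (a rotation up to a possible reflection).
R, _ = torch.linalg.qr(torch.randn(3, 3, dtype=torch.float64))

regs = {
    "identity": torch.eye(3, dtype=torch.float64),
    "anisotropic": torch.diag(torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64)),
}
for name, reg in regs.items():
    err = (whiten(R @ vecs, 1e-2, reg) - R @ whiten(vecs, 1e-2, reg)).abs().max()
    print(f"reg={name}: max equivariance error = {err:.2e}")
```

With the identity regularizer the error stays at numerical noise; any regularizer that is not rotation invariant introduces a genuine equivariance error, which is the behavior the test above exposes.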
Regarding `AccumulatedNormalization`, indeed you could also normalize the noise targets directly, as you suggest. We inherited this from the denoising implementation used for Noisy Nodes, where it is sometimes easier to have an automatic normalizer than to manually preprocess the noise to have unit variance.
Hope that helps!