Embedding reset_parameters() implemented incorrectly
chenjiasheng opened this issue · 3 comments
if hasattr(self, 'lora_A'):
    # initialize A the same way as the default for nn.Linear and B to zero
    nn.init.zeros_(self.lora_A)
    nn.init.normal_(self.lora_B)
It seems there is a mix-up between the variables A and B here: the comment implies that B is being initialized to zero, yet zeros_() is applied to lora_A and normal_() is applied to lora_B.
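For reference, the scheme the comment describes (and that the questions below about Linear refer to) would look roughly like the sketch below. The helper name is made up and this is only an illustration of that convention, not the library's actual code:

import math
import torch.nn as nn

# Sketch of the convention the comment describes: A gets the default
# nn.Linear-style Kaiming-uniform init, B is zeroed so that B @ A == 0
# at the start of training. (Hypothetical helper, not loralib code.)
def init_lora_like_comment(lora_A: nn.Parameter, lora_B: nn.Parameter) -> None:
    nn.init.kaiming_uniform_(lora_A, a=math.sqrt(5))  # nn.Linear's default scheme
    nn.init.zeros_(lora_B)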
I have the same question; however, see Edward's comment here.
To my understanding, Edward resets the parameters of the weight matrix the same way as PyTorch's Embedding.reset_parameters():
# the code taken from torch.nn.modules.sparse.Embedding
def reset_parameters(self) -> None:
    init.normal_(self.weight)
    self._fill_padding_idx_with_zero()
According to Edward's comment, it doesn't really matter which adaptation matrix is set to zero, as long as we don't set both of them to zero.
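A quick way to see that point (arbitrary shapes, purely illustrative): whichever factor is zeroed, the product B @ A starts out as the zero matrix, so the low-rank update contributes nothing on the first forward pass.

import torch

r, d_in, d_out = 4, 16, 32  # made-up rank and dimensions

# Option 1: A zeroed, B normal (as in the Embedding snippet above)
A1 = torch.zeros(r, d_in)
B1 = torch.randn(d_out, r)

# Option 2: B zeroed, A normal (as the comment describes)
A2 = torch.randn(r, d_in)
B2 = torch.zeros(d_out, r)

# Either way the low-rank update starts out as the zero matrix,
# so W + B @ A == W on the first forward pass.
assert torch.all(B1 @ A1 == 0) and torch.all(B2 @ A2 == 0)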
However, I have a couple of questions about resetting parameters in Linear and Embedding, @edwardjhu:
- Setting A to zeros, we're essentially discarding all the knowledge encoded in the pretrained weights; doesn't that defeat the purpose of leveraging transfer learning?
- If we set B to zeros and initialize A with a Kaiming uniform distribution, or a normal distribution, might we still be stuck in a local minimum if A is filled with zeros? Did those initialization choices come from experimental results?
Yes. It's important that we have either A or B set to zero so the first forward pass doesn't deviate from the original model. It's also important that A and B are not both zero, because the gradient would be forever zero in that case. So either set A to zero and B to something else, or vice versa.
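As an illustration of both halves of that claim, here is a small check one could run (the shapes, names, and toy loss are made up): when only one factor is zero, the zeroed factor still receives a nonzero gradient through the other one, whereas when both are zero, every gradient is zero.

import torch

torch.manual_seed(0)
r, d_in, d_out = 4, 16, 32                 # made-up rank and dimensions
x = torch.randn(8, d_in)                   # dummy batch
target = torch.randn(8, d_out)

def first_step_grads(init_A, init_B):
    # LoRA branch only: y = x @ A.T @ B.T; return |grad| sums for A and B
    A = init_A(torch.empty(r, d_in)).requires_grad_()
    B = init_B(torch.empty(d_out, r)).requires_grad_()
    loss = ((x @ A.T @ B.T - target) ** 2).mean()
    loss.backward()
    return A.grad.abs().sum().item(), B.grad.abs().sum().item()

# A zeroed, B normal: A still gets a nonzero gradient through B,
# so training is not stuck even though the branch outputs zero at first.
print(first_step_grads(torch.nn.init.zeros_, torch.nn.init.normal_))

# Both zeroed: both gradients are zero, so no update can ever move them.
print(first_step_grads(torch.nn.init.zeros_, torch.nn.init.zeros_))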
Setting A to zeros, we're essentially discarding all the knowledge encoded in the pretrained weights
Could you explain why?
If we set B to zeros and initialize A with a Kaiming uniform distribution, or a normal distribution, might we still be stuck in a local minimum if A is filled with zeros?
We will get stuck if both A and B are zeros, but not if one of them is not all zeros.
Thanks for your quick response. It sounds to me that setting one of them to zeros is critical to "preserve" the pretrained weights for the first pass. However, if A is initialized with zeros, the result of B @ A would be zeros, and the output from the previous layer multiplied by B @ A would be zeros too? Or did I misunderstand the implementation? (I admit I haven't run the code, so I didn't check whether this is the case at runtime.)
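For what it's worth, here is a minimal check of that scenario against Edward's earlier point (names and shapes are made up): with A zeroed, B @ A is indeed zero, but it is added to the frozen pretrained weight, so the layer's output on the first pass is exactly the pretrained output rather than zero, and A still receives a nonzero gradient through B, so the adapter starts contributing after the first update.

import torch

torch.manual_seed(0)
r, d_in, d_out = 4, 16, 32                     # made-up shapes
x = torch.randn(8, d_in)

W = torch.randn(d_out, d_in)                   # stands in for the frozen pretrained weight
A = torch.zeros(r, d_in, requires_grad=True)   # the zeroed adapter factor
B = torch.randn(d_out, r, requires_grad=True)  # the non-zero adapter factor

y = x @ (W + B @ A).T                          # adapted forward pass
assert torch.allclose(y, x @ W.T)              # identical to the pretrained layer at init

y.sum().backward()
assert A.grad.abs().sum() > 0                  # gradient still reaches A through B
assert B.grad.abs().sum() == 0                 # only B's gradient is zero on this first step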