magic-research/Dataset_Quantization

Embedding calculation

VityaVitalich opened this issue · 1 comment

Dear maintainers, I found your paper really interesting and would like to replicate the results with a different embedding function f, as mentioned in the paper.

However, I came across the piece of code below in submodular.py. I struggle to follow the logic behind this embedding construction and have not found any mention of the procedure in the paper. Could you please explain why it is needed and the intuition behind it?

```python
bias_parameters_grads = torch.autograd.grad(loss, outputs)[0]
weight_parameters_grads = self.model.embedding_recorder.embedding.view(
    batch_num, 1, self.embedding_dim).repeat(1, self.args.num_classes, 1) * \
    bias_parameters_grads.view(batch_num, self.args.num_classes, 1).repeat(
        1, 1, self.embedding_dim)

gradients.append(torch.cat([bias_parameters_grads, weight_parameters_grads.flatten(1)],
                           dim=1).cpu().numpy())
```

Thanks for your interest in this work!

This part is borrowed directly from DeepCore. My understanding is that the operation enhances the information contained in the embeddings.
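For intuition, one reading of the snippet is that it reconstructs the per-sample gradient of the final linear layer: the gradient of the loss with respect to the logits equals the bias gradient, and its outer product with the penultimate embedding equals the weight gradient, so their concatenation is the full last-layer gradient. Below is a minimal sketch (not the repository's code) that checks this equivalence against autograd on a toy linear head; names such as `embedding_dim`, `num_classes`, `head`, and `h` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
embedding_dim, num_classes = 8, 3

# Penultimate-layer embedding h and a final linear head producing logits z = W h + b.
h = torch.randn(1, embedding_dim)
head = torch.nn.Linear(embedding_dim, num_classes)
target = torch.tensor([1])

logits = head(h)                      # shape (1, num_classes)
loss = F.cross_entropy(logits, target)

# g = dL/d(logits); for a linear head this equals the gradient w.r.t. the bias.
g = torch.autograd.grad(loss, logits, retain_graph=True)[0]

# Chain rule: dL/dW is the outer product of g with the embedding, dL/db is g.
approx_weight_grad = torch.outer(g.squeeze(0), h.squeeze(0))

# Compare against the gradients autograd computes for the head's parameters.
weight_grad, bias_grad = torch.autograd.grad(loss, (head.weight, head.bias))
print(torch.allclose(approx_weight_grad, weight_grad))  # True
print(torch.allclose(g.squeeze(0), bias_grad))          # True
```

Under this reading, the concatenated vector used for selection is each sample's last-layer gradient, which carries more loss-related information than the raw embedding alone.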