gimpong/AAAI22-MeCoQ

A question about codebook diversity

Closed this issue · 7 comments

Dear authors, I'm looking into the proposed paper, and I have a question about the loss term that encourages codebook diversity. I notice that the green curve in Figure 2b remains above zero the whole time. I assume this loss term pushes the codes in a codebook to be as orthogonal to each other as possible and should therefore be minimized toward 0 (please correct me if I'm wrong).
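For concreteness, here is the kind of diversity term I have in mind (just a rough sketch in PyTorch; the function name and the cosine-similarity formulation are my own assumptions, not necessarily your exact implementation):

```python
import torch
import torch.nn.functional as F

def codebook_diversity_loss(codebook: torch.Tensor) -> torch.Tensor:
    # codebook: (K, D) tensor holding K codewords of dimension D.
    c = F.normalize(codebook, dim=-1)   # unit-norm codewords
    sim = c @ c.t()                     # (K, K) pairwise cosine similarities
    K = sim.size(0)
    # Average off-diagonal similarity; 0 when all codewords are mutually orthogonal.
    return (sim.sum() - sim.diagonal().sum()) / (K * (K - 1))
```

A cosine similarity can be negative in general, so a term like this is only guaranteed to stay above 0 when the codewords are element-wise non-negative, which is exactly what my question below is about.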

My question is: do you have any operation that constrains the codes in a codebook to be vectors containing only non-negative values?

Appreciate your answer in advance :)

Hi @xy9485 ,

Thank you for your question! We did not explore operations to produce only non-negative vectors.

Similar to the commonly used L2 regularization, the diversity regularization loss empirically stays above 0, balancing against the other learning objectives (e.g., contrastive learning) to reach an overall optimum.

Hi @gimpong
Thank you for your answer. But the commonly used L2 norm as a regularization term is guaranteed to be non-negative, whereas the diversity regularization isn't, correct?
Or do you mean that the codes in the codebook tend to remain non-negative empirically during training? That seems possible if the input features for VQ come from a previous layer with a ReLU activation.

Yes, I think you are right. Maybe the ReLU leads to non-negative codes in the codebook.
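If you want to double-check that, a quick sanity check would be something like the following (`codebook` here is just a placeholder for the learned (K, D) codeword tensor; a random non-negative dummy is used so the snippet runs on its own):

```python
import torch

# Placeholder for the learned (K, D) codeword tensor.
codebook = torch.rand(64, 32)

# Fraction of codebook entries that are non-negative; a value close to 1.0
# would support the ReLU explanation.
nonneg_fraction = (codebook >= 0).float().mean().item()
print(f"non-negative codebook entries: {nonneg_fraction:.2%}")
```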

I also notice that you used the entropy of quantization as an additional regularization term, combined with the codebook diversity term. How much does it help empirically, or does it not?
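To be clear about what I mean by the entropy term, something along these lines (my own sketch; `assignment_logits` is a hypothetical (N, K) tensor of soft codeword-assignment scores):

```python
import torch
import torch.nn.functional as F

def quantization_entropy(assignment_logits: torch.Tensor) -> torch.Tensor:
    # assignment_logits: (N, K) scores of N encoded inputs over K codewords.
    p = F.softmax(assignment_logits, dim=-1)
    # Mean Shannon entropy of the soft codeword-assignment distributions.
    return -(p * torch.log(p + 1e-12)).sum(dim=-1).mean()
```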

In my experiments, the entropy regularization term made little difference to performance. It is OK to remove this term.

That's useful to know, thanks again for your responses :)

It's my pleasure. :)