some simple questions about released code
Closed this issue · 3 comments
Hi, thanks very much for sharing this nice work. I have some simple questions about the code in train_dance.py:
- In lines 188-189:
  ### We do not use memory features present in mini-batch
  feat_mat[:, index_t] = -1 / conf.model.temp
  I understand this computes the similarity between the mini-batch and itself using the current features rather than the memory features, but what is the meaning of the value -1 / conf.model.temp?
- In lines 195-196:
  loss_nc = conf.train.eta * entropy(torch.cat([out_t, feat_mat, feat_mat2], 1))
  I don't understand the effect of directly concatenating feat_mat and feat_mat2. Why not put feat_mat2 into the proper index positions in feat_mat? As we know, the indices of feat_t differ between iterations.
Thanks very much; I hope to hear back from you.
Hi, thanks for your interest in our work.
- We just fill in small values at index_t. Since cosine similarity is bounded below by -1, the value -1 / conf.model.temp is the minimum of the temperature-scaled similarity.
- As you mention, putting feat_mat2 into the proper index positions in feat_mat is one correct implementation. We simply chose the easier route of concatenating feat_mat (with the mini-batch indices deactivated) and feat_mat2 (the similarity within the mini-batch).
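To illustrate the first point, here is a minimal sketch of the masking step. The tensor shapes, the temperature value, and the variable names `feat`, `memory`, and `index_t` are made-up placeholders standing in for the batch features, the memory bank, and the mini-batch positions in train_dance.py:

```python
import torch
import torch.nn.functional as F

temp = 0.05                            # temperature (plays the role of conf.model.temp)
feat = torch.randn(4, 8)               # current mini-batch features
memory = torch.randn(10, 8)            # memory bank of target features
index_t = torch.tensor([2, 5, 7, 9])   # positions of this mini-batch in the memory

# L2-normalize so the dot product below is cosine similarity in [-1, 1].
feat = F.normalize(feat, dim=1)
memory = F.normalize(memory, dim=1)

# Temperature-scaled similarity between the mini-batch and the memory bank.
feat_mat = feat @ memory.t() / temp

# Mask the columns corresponding to the mini-batch itself: -1 is the smallest
# possible cosine similarity, so -1 / temp is the minimum of the scaled
# similarity, and these entries receive (near-)zero weight after a softmax.
feat_mat[:, index_t] = -1 / temp
```

After the assignment, every masked column holds the floor value, while the remaining columns keep the real memory similarities.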
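And a minimal sketch of the concatenation in the second point. The `entropy` helper and the `eta` weight below are hypothetical stand-ins for the ones in train_dance.py (the actual definitions may differ), and all shapes are made up:

```python
import torch
import torch.nn.functional as F

def entropy(p):
    # Mean entropy of the row-wise softmax distributions (an assumed
    # definition; the helper in train_dance.py may differ slightly).
    p = F.softmax(p, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

eta = 0.05                        # entropy loss weight (plays the role of conf.train.eta)
out_t = torch.randn(4, 3)         # classifier logits for the mini-batch
feat_mat = torch.randn(4, 10)     # similarity to the memory (mini-batch columns masked)
feat_mat2 = torch.randn(4, 4)     # similarity within the mini-batch

# Instead of scattering feat_mat2 back into feat_mat at index_t, the three
# score matrices are concatenated along dim 1. The softmax inside entropy()
# does not care about column order, and the masked columns of feat_mat carry
# near-zero probability, so this is effectively equivalent and simpler.
loss_nc = eta * entropy(torch.cat([out_t, feat_mat, feat_mat2], 1))
```

The result is a scalar loss, just as if the mini-batch similarities had been written back into their original positions.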
Thanks for your reply. I still have a very naive question: what is the difference between train_dance.py and train_class_inc_dance.py? I'm a rookie in this field. :sweat_smile:
train_class_inc_dance.py is the script used for the class-incremental DA experiment (Table 5 in the paper).