The paper mentions three augmentation strategies, but I only found two in the code.
The paper mentions three data augmentation strategies: Flip Response, Drop Item, and Swap Adjacent Items. However, I could only find code for Flip Response:
s_flip[b, i] = 1 - s_flip[b, i]
and Swap Adjacent Items:
q_[b, i], q_[b, i + 1] = q_[b, i + 1], q_[b, i]
s_[b, i], s_[b, i + 1] = s_[b, i + 1], s_[b, i]
I couldn't locate the code for Drop Item. Could you point me to where it is implemented, or share the core code as in the examples above?
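For reference, here is a minimal self-contained sketch of the two augmentations I did find; the function names, tensor shapes, and the per-position probability `p` are my own assumptions for illustration, not the repository's exact code:

```python
import torch


def flip_response(s, p=0.3):
    """Flip Response: randomly toggle some responses (assumes s holds 0/1 labels)."""
    s_flip = s.clone()
    for b in range(s_flip.size(0)):
        for i in range(s_flip.size(1)):
            if torch.rand(1).item() < p:
                s_flip[b, i] = 1 - s_flip[b, i]
    return s_flip


def swap_adjacent(q, s, p=0.3):
    """Swap Adjacent Items: randomly swap neighbouring (question, response) pairs."""
    q_, s_ = q.clone(), s.clone()
    for b in range(q_.size(0)):
        for i in range(q_.size(1) - 1):
            if torch.rand(1).item() < p:
                qi, qj = q_[b, i].item(), q_[b, i + 1].item()
                si, sj = s_[b, i].item(), s_[b, i + 1].item()
                q_[b, i], q_[b, i + 1] = qj, qi
                s_[b, i], s_[b, i + 1] = sj, si
    return q_, s_
```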
The Drop Item strategy is achieved by applying a random mask here. The positive and negative sequences are assigned different masks, which achieves the effect of the contrastive drop operation.
DTransformer/DTransformer/model.py, lines 320 to 333 at commit 479175b
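To make the idea concrete, below is a minimal sketch of random masking used as a drop operation, assuming the maintainer's description above; the helper name `random_drop_mask`, the shapes, and `drop_prob` are illustrative assumptions, not the repository's exact implementation:

```python
import torch


def random_drop_mask(valid_mask, drop_prob=0.1):
    """Randomly turn off some valid positions, emulating a Drop Item step."""
    drop = torch.rand(valid_mask.shape, device=valid_mask.device) < drop_prob
    return valid_mask & ~drop


# Toy batch: q holds question ids, s holds responses; shape (batch, seq_len).
q = torch.randint(0, 100, (2, 10))
s = torch.randint(0, 2, (2, 10))
valid = torch.ones_like(q, dtype=torch.bool)

# Sampling the mask independently for the positive and negative views means
# each view ignores a different subset of items, giving the contrastive
# "drop" effect without physically removing entries from the sequence.
pos_mask = random_drop_mask(valid)
neg_mask = random_drop_mask(valid)
```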
Thank you very much!