Could we train Implicit Sequence Models in a multi-label scenario?
JoaoLages opened this issue · 4 comments
Instead of having only a single label (item) per position, have many instead. I don't know what losses we could use.
Are you trying to model a scenario where a user buys two or more items at the same time? You are right that this isn't supported here in a straightforward way.
I'm not sure of the intent of the original question, but this is something I am also attempting to do.
I think about my sequences like this: T1(p1, p2), T2(p1, p2, p3), where T is a transaction and p is a product. The transactions are ordered, but the products are not ordered within a transaction.
The evaluation is easy enough: the default precision (len(intersection) / len(predictions)), recall (len(intersection) / len(targets)) and F1 work just fine, just use T1 ... T1+i to predict Tn. You have to modify the evaluation function to separate by transaction instead of masking the last k products, and to deal with potential repeat purchases within a transaction.
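For concreteness, here is a minimal sketch of that per-transaction precision/recall/F1, assuming each transaction is represented as a set of item ids (the variable names are illustrative, not part of the library's API):

```python
def transaction_f1(predicted_items, target_items):
    """Precision, recall and F1 for one predicted transaction.

    Both arguments are sets of item ids; order within a
    transaction does not matter.
    """
    intersection = predicted_items & target_items
    precision = len(intersection) / max(len(predicted_items), 1)
    recall = len(intersection) / max(len(target_items), 1)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# Example: a history of transactions, each a set of product ids.
history = [{1, 2}, {1, 2, 3}]   # T1(p1, p2), T2(p1, p2, p3)
target = {2, 4}                 # the held-out next transaction
predicted = {2, 5}              # top-k items scored from the history

print(transaction_f1(predicted, target))  # (0.5, 0.5, 0.5)
```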
As far as the model goes, however, I'm not certain the current implementation works for semi-sequential data. Do you have any thoughts about a way to extend the current model to handle a scenario like this?
Thank you for all the hard work on this amazing package.
Come to think of it, it may be possible to do some form of sub-sorting within transactions. My idea is to impose an order on the training set. My initial thought is to sort within a transaction by the value of that product within the transaction (its utility), so that value is always ascending within a transaction. Then you could manually assign custom timestamps to impose this order. This should work because the order of my prediction sequence isn't important.
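A rough sketch of that sub-sorting idea, assuming the raw data has one row per (user, transaction, product) with a per-product value; the DataFrame layout and column names are assumptions, not anything the library prescribes:

```python
import pandas as pd

# One row per (user, transaction, product); `value` is the product's
# utility within that transaction (an assumption about the raw data).
df = pd.DataFrame({
    'user_id':        [1, 1, 1, 1, 1],
    'transaction_id': [1, 1, 2, 2, 2],
    'product_id':     [10, 11, 10, 11, 12],
    'value':          [5.0, 2.0, 1.0, 4.0, 3.0],
})

# Sort so that value ascends within each transaction, keeping the
# order of the transactions themselves intact.
df = df.sort_values(['user_id', 'transaction_id', 'value'])

# Assign synthetic, strictly increasing timestamps per user so the
# sequence model sees the imposed order.
df['timestamp'] = df.groupby('user_id').cumcount()

print(df)
```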
@maciejkula yeah, that's it. Each time step i could have more than one item associated with it. This would be very useful in cases where 'the order of the items is irrelevant'.
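On the question of losses, one possible framing (a sketch only, not something the package implements): treat each time step's target as a multi-hot vector over the item catalogue and use a binary cross-entropy loss. In the minimal PyTorch sketch below, the per-step sequence representation and the item embedding matrix are placeholders for whatever the model produces; in practice you would likely sample negatives rather than score the full catalogue.

```python
import torch
import torch.nn.functional as F

num_items, embed_dim, batch, seq_len = 1000, 32, 4, 5

# Placeholders for what a sequence model would produce: one hidden
# state per time step, plus a learnable embedding per catalogue item.
user_repr = torch.randn(batch, seq_len, embed_dim, requires_grad=True)
item_embeddings = torch.randn(num_items, embed_dim, requires_grad=True)

# Multi-hot targets: several items can be positive at each time step.
targets = torch.zeros(batch, seq_len, num_items)
targets[0, 0, torch.tensor([10, 42])] = 1.0  # two labels at one step

# Score every item at every time step and apply a multi-label loss.
logits = user_repr @ item_embeddings.t()   # (batch, seq_len, num_items)
loss = F.binary_cross_entropy_with_logits(logits, targets)
loss.backward()
```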