Code
Opened this issue · 4 comments
Hello, I think your work is great. Do you have any plans to publish the code?
The TOPIC code will be released after a 1-2 month commercial freeze period. You may watch this repository for notifications.
Thanks! I have a small doubt here. The sentence 'Then, we finetune the model Θ(t) on each subsequent training set D(t), t > 1 for 100 epochs' in the experimental section: does it mean training on the same N-way K-shot samples for 100 epochs and then testing? Or on a different batch of samples each epoch, which would be closer to N-way 100K-shot?
@zhukaii The same N-way K-shot samples, since only N*K samples are available in each incremental session. In our original implementation, we simply pack them into a single minibatch and train the embedding with a softmax loss for 100 epochs/iterations. Alternatively, you may use a metric-learning loss or a nearest-class-mean classifier to boost performance.
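To make the setup concrete, here is a minimal numpy sketch of the finetuning step described above: all N*K support samples form a single minibatch, and we run 100 iterations of gradient descent on a softmax (cross-entropy) loss, so 100 epochs and 100 iterations coincide. The dimensions, learning rate, and the use of random placeholder features are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, D = 5, 5, 16                      # N-way K-shot, feature dim (assumed)
X = rng.normal(size=(N * K, D))         # placeholder embedded features
y = np.repeat(np.arange(N), K)          # labels: K samples per class

W = np.zeros((D, N))                    # toy linear classifier on the embedding
lr = 0.1
losses = []

for it in range(100):                   # one minibatch holds all N*K samples,
    logits = X @ W                      # so each iteration is also an epoch
    logits = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)   # softmax probabilities
    losses.append(-np.log(p[np.arange(len(y)), y]).mean())
    grad = X.T @ (p - np.eye(N)[y]) / len(X)  # cross-entropy gradient
    W -= lr * grad
```

With so few samples per session, the single-minibatch loop above overfits quickly, which is why the reply suggests a metric-learning loss or a nearest-class-mean classifier as alternatives.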
Thanks again!