ZhangYuanhan-AI/visual_prompt_retrieval

code for detection and colorization

Closed this issue · 8 comments

Hi, do you have any plan to release code for detection and colorization?

Already uploaded

Can you please provide some detailed instruction on how to run the code on these two tasks? Thanks!

Sorry to bother you again, I wonder if featextrater_det.py is used for unsupervised feature extraction and featextrater_det_cont.py is used after the SupCon model is trained?

Hi, what is `featextrater_det.py`?

https://github.com/ZhangYuanhan-AI/visual_prompt_retrieval/blob/det/tools/featextrater_det.py

Hi Yuanhan, I am trying to reproduce the colorization task now. I find that the original MAE-VQGAN randomly samples from the ImageNet validation set for both the support and query samples, but your paper mentions 'For all experiments, in-context examples come from the training set'.

As I understand it, a reasonable pipeline would be to train the SupCon model using support-query pairs from the training set and test it with pairs from the validation set. I wonder which is the correct setting for this experiment.

I would appreciate it if you could help me with this problem. Thank you very much!
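For what it's worth, the split described above could be sketched as follows. This is a minimal illustration with stand-in index ranges (the real index lists and pair counts are assumptions, not the repo's actual setup): support-query pairs for SupCon training are sampled only from the training split, while evaluation pairs come only from the validation split.

```python
import random

def make_pairs(indices, num_pairs, rng):
    """Sample (query, support) index pairs from within a single split."""
    pairs = []
    for _ in range(num_pairs):
        q, s = rng.sample(indices, 2)  # query and support are distinct images
        pairs.append((q, s))
    return pairs

rng = random.Random(0)
train_idx = list(range(1000))          # stand-in for ImageNet training indices
val_idx = list(range(1000, 1200))      # stand-in for validation indices

train_pairs = make_pairs(train_idx, 100, rng)  # would be used to train SupCon
val_pairs = make_pairs(val_idx, 20, rng)       # used only for evaluation
```

The key property is simply that the two pair sets never mix splits, so no validation image leaks into SupCon training.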

Hi, the support images come from the training set.

So in your case, I have to compute a 1.3M (or perhaps a randomly chosen 50,000) × 50,000 similarity matrix, pick the top-50 for each test sample, and then use the trained SupCon model to choose the best support sample. Is this right?
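The first stage of that two-step retrieval (cosine-similarity top-50, before SupCon re-ranking) could look like the sketch below. This is a minimal NumPy version under my own assumptions, not the repo's code: features are hypothetical dense arrays, and the query set is processed in chunks so the full 1.3M × 50,000 matrix never has to sit in memory at once.

```python
import numpy as np

def topk_support_candidates(train_feats, test_feats, k=50, chunk=1024):
    """For each test (query) feature, return the indices of the k most
    cosine-similar training (support) features.

    train_feats: (N_train, D) array, test_feats: (N_test, D) array.
    """
    # L2-normalize so a plain dot product equals cosine similarity
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)

    topk = np.empty((len(test), k), dtype=np.int64)
    for start in range(0, len(test), chunk):
        sims = test[start:start + chunk] @ train.T          # (chunk, N_train)
        # argpartition finds the k best (unordered), then we sort just those k
        part = np.argpartition(-sims, k - 1, axis=1)[:, :k]
        order = np.argsort(-np.take_along_axis(sims, part, axis=1), axis=1)
        topk[start:start + chunk] = np.take_along_axis(part, order, axis=1)
    return topk
```

The trained SupCon model would then score only these k candidates per test sample to pick the single best support image, which is far cheaper than scoring all 1.3M training images directly.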