amazon-science/prompt-pretraining
Official implementation for the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition"
Python · Apache-2.0 license
Issues
POMP weights for other CLIP variants
#13 opened by 100rab-S - 1
Are there POMP weights for the ViT-B/32 backbone variant of the CLIP model now?
#17 opened by taoxinlily - 2
How to download ImageNet21K?
#16 opened by taoxinlily - 0
sample K-1 negative classes
#15 opened by YaoShunyu19 - 1
Thanks for your great work! How should I apply this method to downstream tasks?
#12 opened by Xiloy - 8
Which version of ImageNet21K should I download?
#10 opened by zhszysrh - 3
Cannot get Object365 classname
#9 opened by zhszysrh - 2
Inference demo for detection
#7 opened by zhszysrh - 2
Pre-training model on ImageNet-21K
#4 opened by Jiaxzhao - 4