About CLIP RN-50
Strike1999 opened this issue · 4 comments
Thank you very much for your exciting work!
I would like to ask whether you could provide the code for generating CAMs with CLIP RN50.
I wish you good luck with your research and a happy life.
Thanks for your interest in our work.
This repo is mainly designed for CLIP-ViT. If you want to use RN50, you need to modify a few lines as follows (taking VOC as an example; a code sketch of the resulting changes follows each list of steps):
generate_cams_voc12.py
1. Comment out or remove line 134 (CLIP-ES/generate_cams_voc12.py, line 134 in f0cecc7).
2. Replace image_features with image in line 144, so that it reads input_tensor = [image, text_features_temp.to(device_id), h, w] (CLIP-ES/generate_cams_voc12.py, line 144 in f0cecc7).
3. Comment out or remove lines 160-197, line 202, line 207, and line 213; uncomment line 212.
4. Change target_layers in line 239 to target_layers = [model.visual.layer4[-1]] (CLIP-ES/generate_cams_voc12.py, line 239 in f0cecc7).
5. Set reshape_transform to None in line 240: cam = GradCAM(model=model, target_layers=target_layers, reshape_transform=None) (CLIP-ES/generate_cams_voc12.py, line 240 in f0cecc7). A sketch combining steps 1-5 is given right after this list.
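Putting steps 1-5 together, the affected part of generate_cams_voc12.py would look roughly like the sketch below. The line numbers refer to f0cecc7; the comments describing what the removed ViT-specific blocks do are my assumptions, and only the statements shown for lines 144, 239 and 240 come from the steps above.

# line 134: ViT-specific image_features computation commented out for RN50
# lines 160-197, 202, 207, 213: ViT attention-based refinement skipped; line 212 uncommented instead

# line 144: pass the raw image tensor instead of precomputed ViT features
input_tensor = [image, text_features_temp.to(device_id), h, w]

# line 239: use the last bottleneck block of RN50's layer4 as the Grad-CAM target layer
target_layers = [model.visual.layer4[-1]]

# line 240: RN50 activations are already spatial (N, C, H, W), so no reshape_transform is needed
cam = GradCAM(model=model, target_layers=target_layers, reshape_transform=None)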
clip/model.py
1. Add a forward function for ResNet-50 to the CLIP object, such as:
def forward_features_resnet50(self, image, text_features, h, w):
    image_features = self.encode_image(image, h, w)
    # normalized features
    image_features = image_features / image_features.norm(dim=1, keepdim=True)
    text_features = text_features / text_features.norm(dim=1, keepdim=True)
    # cosine similarity as logits
    logit_scale = self.logit_scale.exp()
    logits_per_image = logit_scale * image_features @ text_features.t()
    # shape = [global_batch_size, global_batch_size]
    logits_per_image = logits_per_image.softmax(dim=-1)
    # RN50 has no ViT attention maps, so there are no attention weights to return
    attn_weight = None
    return logits_per_image, attn_weight
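As a quick sanity check of the new method, something like the following should work. This is a sketch only: it assumes model is the CLIP RN50 model loaded through this repo's clip package in fp32 on CPU, and that encode_image accepts the extra h, w arguments as in the snippet above; the shapes are placeholders for 20 VOC foreground classes.

import torch

image = torch.randn(1, 3, 224, 224)      # dummy image batch
text_features = torch.randn(20, 1024)    # dummy text features; CLIP RN50's embedding dim is 1024
logits_per_image, attn_weight = model.forward_features_resnet50(image, text_features, 224, 224)
print(logits_per_image.shape)            # torch.Size([1, 20]); each row sums to 1 after the softmax
print(attn_weight)                       # None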
pytorch_grad_cam/activations_and_gradients.py
1. Set self.height = H // 32 and self.width = W // 32 in lines 40-41 (CLIP-ES/pytorch_grad_cam/activations_and_gradients.py, lines 40 to 41 in f0cecc7), since RN50's layer4 output is downsampled by a factor of 32 rather than the 16 used for ViT-B/16 patches.
2. Change the forward call to self.model.forward_features_resnet50(x[0], x[1], H, W) in the same file (see the sketch after this list).
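Taken together, the RN50 path through ActivationsAndGradients.__call__ would look roughly like this. This is a sketch only: the surrounding bookkeeping and how H and W are obtained follow the fork's existing ViT path and are assumptions on my part; only the two changed pieces come from the steps above.

def __call__(self, x):
    # x is the input_tensor list from generate_cams_voc12.py: [image, text_features, h, w]
    self.gradients = []
    self.activations = []
    H, W = x[2], x[3]
    # RN50's layer4 output is downsampled by 32 (ViT-B/16 uses 16)
    self.height = H // 32
    self.width = W // 32
    # dispatch to the RN50 forward added in clip/model.py
    return self.model.forward_features_resnet50(x[0], x[1], H, W)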
Thank you for your patient reply!
I modified the code following the steps above and it runs fine. However, when I apply CRF on top of the CAMs generated by CLIP-RN50, the results are quite low.
python eval_cam_with_crf.py --eval_only
gives the following results:
'Pixel Accuracy': 0.7503926354038674, 'Mean Accuracy': 0.14167843883095207, 'Frequency Weighted IoU': 0.5694430970966254, 'Mean IoU': 0.127361159247873
What could be causing this? (The pseudo-label accuracy produced with ViT-B/16 matches the paper.)
You can try different bg thresholds (such as 8 or 9, as indicated by the result of eval_cam.py; see line 111 in f0cecc7).
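For context, the background threshold is the constant score given to the background class when the CAMs are turned into pseudo-labels, so sweeping it changes how much of each image is assigned to background; the 8 or 9 above presumably index the list of candidate thresholds that eval_cam.py sweeps. A minimal sketch of that conversion (the function and variable names here are illustrative, not the repo's exact code):

import numpy as np

def cams_to_pseudo_label(cams, keys, bg_thresh):
    # cams: (num_fg, H, W) foreground CAM scores in [0, 1]
    # keys: dataset class indices for each CAM channel
    # bg_thresh: constant score assigned to the background class
    bg = np.full((1,) + cams.shape[1:], bg_thresh, dtype=cams.dtype)
    scores = np.concatenate([bg, cams], axis=0)
    argmax = np.argmax(scores, axis=0)            # 0 = background, i > 0 = i-th CAM channel
    pseudo = np.zeros_like(argmax)
    for i, cls in enumerate(keys):
        pseudo[argmax == i + 1] = cls + 1         # shift ids by 1 so that 0 stays background
    return pseudo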
Thanks! It works.