linyq2117/CLIP-ES

About CLIP RN-50

Strike1999 opened this issue · 4 comments

Thank you very much for your exciting work!

Would like to ask if you can provide the code for CLIP RN50 to generate CAMs?

I wish you good luck with your research and a happy life.

Thanks for your interest in our work.

This repo is mainly designed for CLIP-ViT. If you want to use RN50, you need to modify some lines as follows (taking VOC as an example):

generate_cams_voc12.py

1. Comment out or remove line 134:
image_features, attn_weight_list = model.encode_image(image, h, w)

2. Replace image_features with image in line 144, changing
input_tensor = [image_features, text_features_temp.to(device_id), h, w]
to
input_tensor = [image, text_features_temp.to(device_id), h, w]

3. Comment out or remove lines 160-197, line 202, line 207, and line 213, and uncomment line 212.

4. Change target_layers in line 239 from
target_layers = [model.visual.transformer.resblocks[-1].ln_1]
to
target_layers = [model.visual.layer4[-1]]

5. Set reshape_transform to None in line 240, changing
cam = GradCAM(model=model, target_layers=target_layers, reshape_transform=reshape_transform)
to
cam = GradCAM(model=model, target_layers=target_layers, reshape_transform=None)
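Taken together, the edits above amount to something like the following sketch (line numbers and surrounding context are approximate and may drift between repo versions, so treat this as a guide rather than an exact patch):

```diff
 # around line 134: skip the ViT image encoding
-image_features, attn_weight_list = model.encode_image(image, h, w)

 # around line 144: pass the raw image instead of ViT features
-input_tensor = [image_features, text_features_temp.to(device_id), h, w]
+input_tensor = [image, text_features_temp.to(device_id), h, w]

 # around lines 239-240: hook the last ResNet stage; no token-grid reshape needed
-target_layers = [model.visual.transformer.resblocks[-1].ln_1]
+target_layers = [model.visual.layer4[-1]]
-cam = GradCAM(model=model, target_layers=target_layers, reshape_transform=reshape_transform)
+cam = GradCAM(model=model, target_layers=target_layers, reshape_transform=None)
```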

clip/model.py

1. Add a forward function for ResNet-50 to the CLIP class, such as:

def forward_features_resnet50(self, image, text_features, h, w):
    image_features = self.encode_image(image, h, w)
    # normalized features
    image_features = image_features / image_features.norm(dim=1, keepdim=True)
    text_features = text_features / text_features.norm(dim=1, keepdim=True)
    # cosine similarity as logits
    logit_scale = self.logit_scale.exp()
    logits_per_image = logit_scale * image_features @ text_features.t()

    # shape = [global_batch_size, global_batch_size]
    logits_per_image = logits_per_image.softmax(dim=-1)
    attn_weight = None

    return logits_per_image, attn_weight
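The core of that forward function is CLIP's standard normalize-then-cosine-similarity scoring. A minimal NumPy sketch of the same logic (feature values and the fixed logit scale here are made up for illustration; the real model learns logit_scale):

```python
import numpy as np

def cosine_logits(image_features, text_features, logit_scale=100.0):
    # L2-normalize both feature sets, mirroring the forward function above
    image_features = image_features / np.linalg.norm(image_features, axis=1, keepdims=True)
    text_features = text_features / np.linalg.norm(text_features, axis=1, keepdims=True)
    # scaled cosine similarity, then softmax over the text (class) axis
    logits = logit_scale * image_features @ text_features.T
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return exp / exp.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
probs = cosine_logits(rng.normal(size=(2, 512)), rng.normal(size=(20, 512)))
print(probs.shape)         # (2, 20): one row of class probabilities per image
print(probs.sum(axis=-1))  # each row sums to 1
```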

pytorch_grad_cam/activations_and_gradients.py

1. Set self.height = H // 32 and self.width = W // 32 in lines 40-41, replacing
self.height = H // 16
self.width = W // 16

2. Change the forward call to self.model.forward_features_resnet50(x[0], x[1], H, W), replacing
return self.model.forward_last_layer(x[0], x[1])
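The // 32 in step 1 reflects ResNet-50's total downsampling stride of 32 at layer4, whereas ViT-B/16 splits the image into 16 x 16 patches, hence the original // 16. A quick sanity check of the feature-grid sizes (input resolution here is just an example):

```python
# Feature-map grid for a given input size and backbone stride (sketch, not repo code)
def feature_hw(H, W, stride):
    return H // stride, W // stride

print(feature_hw(448, 448, 16))  # ViT-B/16 patch grid -> (28, 28)
print(feature_hw(448, 448, 32))  # ResNet-50 layer4 output -> (14, 14)
```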

Thank you for your patient reply!

I modified the code following the steps above and it runs correctly. However, after applying CRF on top of the CAMs generated by CLIP-RN50, the results are quite low.
python eval_cam_with_crf.py --eval_only
gives:
'Pixel Accuracy': 0.7503926354038674, 'Mean Accuracy': 0.14167843883095207, 'Frequency Weighted IoU': 0.5694430970966254, 'Mean IoU': 0.127361159247873
What could be causing this? (The pseudo-labels generated with ViT-B/16 match the accuracy reported in the paper.)

You can try different bg thresholds (such as 8 or 9, tuned by the result of eval_cam.py) in the line:
bg_score = np.power(1 - np.max(cams, axis=0, keepdims=True), 1)
We set the power to 1 because it performs well on CLIP-ViT. For RN50, the best value may differ.
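Since the max CAM score lies in [0, 1], raising that power shrinks the background score, so more pixels get assigned to foreground classes. A toy illustration (the CAM values below are made up):

```python
import numpy as np

# Toy CAM scores for one class over four pixels (values in [0, 1])
cams = np.array([[0.1, 0.4, 0.7, 0.95]])

# bg_score = (1 - max_c cam_c) ** power; larger powers suppress the
# background score, labelling more pixels as foreground
for power in (1, 8, 9):
    bg_score = np.power(1 - np.max(cams, axis=0, keepdims=True), power)
    print(power, bg_score.round(4))
```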

Thanks! It works.