wpy1999/BAS

bas seems not to work

williamium3000 opened this issue · 3 comments

Great work!
I have tried to reproduce the paper. The experiment meets expectations on the baseline (using the foreground, AC, and CLS losses). However, after adding the BAS loss, the regression drops to 0. I have tried various versions of BAS, including detaching the background map, and so on. Also, the description does not match the demo graph. Could you please share some details of how to train with the BAS loss? Thanks

Thanks for your attention. With regard to the problems you experienced in the reproduction, we think they are due to the following reason: although the AMC module includes two f2 sub-networks with shared weights, the f2 sub-network with Fbg as input obtains its weights by deepcopy, instead of calling self.conv5_1(2,3) directly. As stated in Sec. 3.2: "For the sub-network with Fbg as input, the goal is to generate the background activation value by the same function, so that this sub-network parameter is frozen in back propagation". The code for this part is attached at the end of this reply.
To avoid other possible problems in the reproduction process, there are two points to note: 1. We use a detach operation in the denominator of the BAS loss (Eq. 3), as shown in Figure 2. 2. When L_BAS is larger than 1, we treat it as a constant (to ensure stability at the beginning of training), as described in Sec. 4 (Implementation Details). We also attach this part of the code at the end of this reply.
I hope this is helpful to you.

code 1

import copy
import torch

def forward(self, x, label=None, N=1):
    # Re-copy the f2 layers on every forward pass: the erased branch then
    # computes the same function as f2, but back-propagation only reaches
    # the throwaway copies, leaving the shared weights frozen.
    conv_copy_5_1 = copy.deepcopy(self.conv5_1)
    relu_copy_5_1 = copy.deepcopy(self.relu5_1)
    conv_copy_5_2 = copy.deepcopy(self.conv5_2)
    relu_copy_5_2 = copy.deepcopy(self.relu5_2)
    conv_copy_5_3 = copy.deepcopy(self.conv5_3)
    relu_copy_5_3 = copy.deepcopy(self.relu5_3)
    classifier_cls_copy = copy.deepcopy(self.classifier_cls)
    batch = x.size(0)
    ## The next segment (which computes x_4 and x_saliency) is omitted
    ## erase: suppress the predicted foreground and re-classify the remainder
    x_erase = x_4.detach() * (1 - x_saliency)
    x_erase = self.pool4(x_erase)
    x_erase = conv_copy_5_1(x_erase)
    x_erase = relu_copy_5_1(x_erase)
    x_erase = conv_copy_5_2(x_erase)
    x_erase = relu_copy_5_2(x_erase)
    x_erase = conv_copy_5_3(x_erase)
    x_erase = relu_copy_5_3(x_erase)
    x_erase = classifier_cls_copy(x_erase)
    x_erase = self.avg_pool(x_erase).view(x_erase.size(0), -1)
    ## x_erase_sum: score of the ground-truth class on the erased branch
    self.x_erase_sum = torch.zeros(batch).cuda()
    for i in range(batch):
        self.x_erase_sum[i] = x_erase[i][label[i]]
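
A note on the design choice: because deepcopy clones the f2 weights on every forward pass, the erased branch computes exactly the same function as f2, while its gradients accumulate only in the throwaway copies, which the optimizer never sees. A minimal sketch, using a hypothetical one-layer conv standing in for conv5_1, shows the effect:

import copy
import torch
import torch.nn as nn

# Hypothetical stand-in for conv5_1.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)
conv_copy = copy.deepcopy(conv)  # fresh copy, as in the forward pass above

out = conv_copy(torch.randn(1, 1, 8, 8)).sum()
out.backward()

print(conv.weight.grad)       # None: the original layer receives no gradient
print(conv_copy.weight.grad)  # populated, but the optimizer never tracks the copy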

code 2

def bas_loss(self):
    batch = self.x_sum.size(0)
    # Detach the denominator (Eq. 3): gradients flow only through the
    # erased-branch score, not through the original class score.
    x_sum = self.x_sum.clone().detach()
    x_res = self.x_erase_sum
    res = x_res / (x_sum + 1e-8)
    # Where the erased score reaches the original score (L_BAS >= 1),
    # replace the ratio with a constant for stability early in training.
    res[x_res >= x_sum] = 0  ## or 1

    # Area term: per-sample mean of the predicted foreground map.
    x_saliency = self.x_saliency
    x_saliency = x_saliency.clone().view(batch, -1)
    x_saliency = x_saliency.mean(1)

    loss = res + x_saliency * 0.7
    loss = loss.mean(0)
    return loss
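
For reference, a self-contained sketch of what bas_loss computes per sample, with made-up numbers (the tensor values below are illustrative only):

import torch

# Per-sample ground-truth class scores: x_sum from the original branch,
# x_erase_sum from the erased branch; plus each sample's mean saliency.
x_sum = torch.tensor([4.0, 2.0])
x_erase_sum = torch.tensor([1.0, 3.0])  # second sample exceeds its x_sum
saliency_mean = torch.tensor([0.30, 0.25])

res = x_erase_sum / (x_sum + 1e-8)
res[x_erase_sum >= x_sum] = 0           # ratio >= 1 replaced by a constant
loss = (res + 0.7 * saliency_mean).mean()
print(loss)  # 0.3175 = (0.25 + 0.21 + 0 + 0.175) / 2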

Thanks, I have been able to reach 85 GT-known loc and 70 cls top-1. I wonder if the training settings are available, e.g., the learning rate, optimizer, and so on. Thanks so much!

To be honest, I find this accuracy hard to believe. Because BAS uses detach and deepcopy in the AMC module, the classification accuracy is not affected during the training process. In fact, the classification accuracy of BAS is quite high, but your reproduction's classification accuracy is only 70, which is even much lower than CAM (76). I think there may be problems with your reproduction code. If possible, you can send your reproduced code to me at wpy364755620@mail.ustc.edu.cn
We use the SGD optimizer in the training process. The parameters are as follows: lr=0.001, weight_decay=5e-4, momentum=0.9, epoch=100, decay_epoch=80, decay_rate=0.1.
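
A minimal sketch of that schedule, assuming a placeholder model and PyTorch's MultiStepLR for the single decay step at epoch 80 (the reply above states only the hyper-parameter values, not the scheduler class):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder for the BAS network
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=5e-4)
# decay_rate=0.1 applied at decay_epoch=80
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80], gamma=0.1)

for epoch in range(100):
    ...  # one training epoch
    scheduler.step()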