KumapowerLIU/Rethinking-Inpainting-MEDFE

About visualizing the feature map

WanderingMeteor opened this issue · 3 comments

Hello! Thanks for your work!
I have a question about visualizing the feature map.
In the paper, Figure 4 shows visualizations of feature maps from different layers. You said that you used a 1×1 convolutional layer to map high-dimensional feature maps to color images. I wonder:

  1. How did you obtain the weights of the 1×1 convolutional layer?
  2. Did you first do the 1×1 conv on the feature map, and then upsample it to the input size (256×256)?

This is a good question. Someone emailed me the same question. We use a (nn.Conv2d, nn.InstanceNorm, nn.Tanh) module to map the CNN feature maps to color images; specifically, we train the (nn.Conv2d, nn.Tanh) head by computing the distance between the original image and the output (color image). For fairness, the training configuration of each 1×1 convolutional layer (one per CNN feature map) is the same. There may be better ways to visualize CNN features in inpainting tasks; if you find one, please share it with me, thank you.
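For concreteness, here is a minimal PyTorch sketch of such a visualization head. The L1 distance, bilinear upsampling to 256×256, and the simple per-layer training loop are assumptions filling in details not stated above; the author only specifies the (nn.Conv2d, nn.InstanceNorm, nn.Tanh) mapping and a distance to the original image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureVisHead(nn.Module):
    """Maps a high-dimensional feature map to a 3-channel image with a 1x1 conv."""
    def __init__(self, in_channels):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, 3, kernel_size=1),  # 1x1 convolution
            nn.InstanceNorm2d(3),
            nn.Tanh(),  # output in [-1, 1], matching a normalized RGB image
        )

    def forward(self, feat, out_size=256):
        x = self.proj(feat)
        # Upsample the projected map to the input resolution for visualization
        # (bilinear upsampling is an assumed choice).
        return F.interpolate(x, size=(out_size, out_size),
                             mode='bilinear', align_corners=False)


def train_vis_head(head, feat, target_img, steps=500, lr=1e-3):
    """Fit one head so its output matches the original image.

    L1 is used here as an assumed 'distance'; `feat` should be detached from
    the main network so only the head's parameters are updated.
    """
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out = head(feat)
        loss = F.l1_loss(out, target_img)  # distance between output and original image
        loss.backward()
        opt.step()
    return head
```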

Okay, thank you very much!

I still have some confusion about the training of the visualization (nn.Conv2d, nn.InstanceNorm, nn.Tanh) modules. Did you train them while keeping the main inpainting network frozen, or train the visualization heads and the main network jointly?