MPI-IS/bilateralNN

how to implement your colour upsampling experiment

Closed this issue · 3 comments

As titled, how can I implement your colour upsampling experiment? For example, how do I generate the upsampled image and compute the corresponding PSNR?
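For the PSNR part of the question, a minimal NumPy sketch (the `psnr` helper below is hypothetical, not a function from this repo):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio between two images (hypothetical helper)."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: one pixel off by 10 grey levels in a 4x4 image
ref = np.full((4, 4), 100, dtype=np.uint8)
est = ref.copy()
est[0, 0] = 110
print(round(psnr(ref, est), 2))  # 40.17
```

In the upsampling experiments, `reference` would be the ground-truth high-resolution colour image and `estimate` the network's `image_upsampled` output.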

The scripts for the joint bilateral upsampling experiments were written for the old Caffe and have not yet been updated to the new permutohedral layer (which is available on our website). I will update them and publish new example scripts for upsampling when I find some time.

It is easy to do upsampling using the 'Permutohedral' layer. Below is the permutohedral layer prototxt for color upsampling. For your reference, I have attached the complete 'deploy.prototxt' for 8x color upsampling to this message. Depending on your dataset and upsampling task, you will need to validate the bilateral feature scales.

The layer below performs 'Gauss' bilateral upsampling. You can add it to your network and learn the filter parameters to perform 'learned' bilateral upsampling.

layer {
  name: "Upsample"
  type: "Permutohedral"

  bottom: "image_color_small"      # Input blob (low-resolution colour image)
  bottom: "bilateral_features_in"  # Input features (low-resolution guidance grayscale image)
  bottom: "bilateral_features_out" # Output features (high-resolution guidance grayscale image)

  top: "image_upsampled"           # Output filtered blob (high-resolution colour image)

  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  permutohedral_param {
    num_output: 3         # Number of filter banks == dimension of the output signal.
    group: 3              # Number of convolutional groups (default is 1).
    neighborhood_size: 2  # Filter neighborhood size. In our later experiments on
                          # other tasks, we found that neighborhood_size: 1 works
                          # better and faster. You can try neighborhood_size: 1 here.
    bias_term: true
    norm_type: AFTER
    offset_type: DIAG     # FULL (default): full Gaussian offset;
                          # DIAG: diagonal Gaussian offset;
                          # NONE: no offset.
  }
}
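For intuition, the 'Gauss' variant of this layer computes (an efficient permutohedral-lattice approximation of) joint bilateral filtering: each high-resolution output pixel is a Gaussian-weighted average of low-resolution input pixels, weighted by spatial distance and guidance-intensity difference. A deliberately slow brute-force NumPy sketch, with hypothetical helper and parameter names (not the repo's code):

```python
import numpy as np

def joint_bilateral_upsample(low_color, low_guide, high_guide,
                             sigma_pos=1.0, sigma_val=0.1):
    """Brute-force joint bilateral upsampling for illustration only.
    low_color: (h, w, 3) low-res colour; low_guide: (h, w) low-res guidance;
    high_guide: (H, W) high-res guidance. All float arrays."""
    H, W = high_guide.shape
    h, w = low_guide.shape
    out = np.zeros((H, W, 3))
    # Low-res pixel coordinates mapped into the high-res coordinate frame
    ys = np.arange(h) * (H / h)
    xs = np.arange(w) * (W / w)
    lyy, lxx = np.meshgrid(ys, xs, indexing="ij")
    for i in range(H):
        for j in range(W):
            # Gaussian weights over spatial distance and guidance difference
            d_pos = (lyy - i) ** 2 + (lxx - j) ** 2
            d_val = (low_guide - high_guide[i, j]) ** 2
            wgt = np.exp(-d_pos / (2 * sigma_pos ** 2)
                         - d_val / (2 * sigma_val ** 2))
            wgt /= wgt.sum()
            out[i, j] = (low_color * wgt[..., None]).sum(axis=(0, 1))
    return out

# Toy check: red pixels under guide value 0, blue under guide value 1;
# a sharp guidance edge should be preserved in the upsampled colours.
low_guide = np.array([[0., 1.], [0., 1.]])
low_color = np.zeros((2, 2, 3))
low_color[:, 0, 0] = 1.0  # left column red
low_color[:, 1, 2] = 1.0  # right column blue
high_guide = np.zeros((4, 4))
high_guide[:, 2:] = 1.0
out = joint_bilateral_upsample(low_color, low_guide, high_guide,
                               sigma_pos=100.0, sigma_val=0.05)
```

The permutohedral layer replaces this O(HW·hw) loop with lattice splat/blur/slice operations, and the 'learned' variant replaces the fixed Gaussian blur with learned filter weights.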

To get features for constructing permutohedral space:

layer {
  name: "bilateral_features_out"
  type: "PixelFeature"
  bottom: "image_gray"
  top: "bilateral_features_out"
  pixel_feature_param {
    type: POSITION_AND_RGB
    pos_scale: 0.026   # You need to validate these scales depending on your
    color_scale: 0.167 # task, dataset and filter neighborhood size.
  }
}
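Conceptually, this feature layer just concatenates scaled pixel positions with scaled pixel values. A rough NumPy sketch for a grayscale guidance image (a hypothetical re-implementation, using the scales from the prototxt above, which you would still need to validate for your own task):

```python
import numpy as np

def position_and_intensity_features(gray, pos_scale=0.026, color_scale=0.167):
    """Per-pixel features: scaled (y, x) position plus scaled intensity.
    Returns an array of shape (3, h, w): two position channels, one value channel.
    (With an RGB guidance image there would be 2 + 3 = 5 channels.)"""
    h, w = gray.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return np.stack([pos_scale * yy, pos_scale * xx, color_scale * gray])

# Example: constant 2x3 grayscale image
gray = np.ones((2, 3))
feats = position_and_intensity_features(gray)
print(feats.shape)  # (3, 2, 3)
```

Larger `pos_scale` makes the filter more spatially local; larger `color_scale` makes it respect intensity edges more strongly.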

layer {
  name: "bilateral_features_in"
  type: "Pooling"
  bottom: "bilateral_features_out"
  top: "bilateral_features_in"
  pooling_param {
    pool: MAX
    kernel_size: 1
    stride: 8
  }
}
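A 1x1 max pool with stride 8 is just subsampling: it picks every 8th pixel, so the low-resolution input features stay aligned with the high-resolution output features. The equivalent NumPy operation (variable names are illustrative):

```python
import numpy as np

# High-res features in (C, H, W) layout, as Caffe blobs are
features_out = np.random.rand(3, 64, 64)

# kernel_size: 1, stride: 8  ==  take every 8th row and column
features_in = features_out[:, ::8, ::8]
print(features_in.shape)  # (3, 8, 8)
```

This matches the 8x upsampling factor of the experiment; for a different factor you would change the stride accordingly.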

Let me know if something is not clear.

color_upsample.txt

@varunjampani, @raffienficiaud..

Hi, what about the Assamese character recognition example? Is the code available anywhere online?

Thanks

No, that example is not online. Do you have any questions, or are you looking for any specific scripts (e.g. for preparing data) or prototxts?