twhui/LiteFlowNet

Charbonnier loss in your paper

adhara123007 opened this issue · 1 comment

Dear Sir,

I was going through your paper. There is a statement saying:
We also fine-tuned LiteFlowNet on a mixture of Sintel clean and final training data (LiteFlowNet-ft) using the generalized Charbonnier loss.
I am a little bit confused when I look at the default Caffe parameters:

// Message that stores parameters used by L1LossLayer
message L1LossParameter {
  optional bool l2_per_location = 1 [default = false];
  optional bool l2_prescale_by_channels = 2 [default = false]; // old style
  optional bool normalize_by_num_entries = 3 [default = false]; // if we want to normalize not by batch size, but by the number of non-NaN entries
  optional float epsilon = 4 [default = 1e-2]; // constant for smoothing near zero
  optional float plateau = 3001 [default = 0]; // L1 errors smaller than the plateau value result in zero loss and no gradient
  optional float power = 5 [default = 0.5]; // for robust loss; power < 0.5 => non-convex
}

With these defaults, the loss function always seems to be a Charbonnier loss with alpha = 1 and epsilon^2 = 1e-2.
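For reference, here is a minimal NumPy sketch of the penalty those defaults describe, assuming the layer applies rho(x) = (x^2 + epsilon)^power per entry, as the proto comments suggest. The function name and standalone implementation are illustrative, not taken from the repo:

import numpy as np

def generalized_charbonnier(diff, epsilon=1e-2, power=0.5):
    # Hypothetical standalone version of the penalty described by the
    # L1LossParameter defaults above: rho(x) = (x^2 + epsilon)^power.
    # power = 0.5 gives the standard Charbonnier penalty (a smooth L1);
    # power < 0.5 makes it non-convex, per the proto comment.
    return (diff ** 2 + epsilon) ** power

# Example: average per-entry loss on a hypothetical (u, v) flow residual map.
err = np.random.randn(2, 384, 512)
loss = generalized_charbonnier(err).mean()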

Did you use different parameters when you explicitly mentioned the Charbonnier loss?

twhui commented

The use of the generalized Charbonnier loss is presented in the README.