MPI-IS/bilateralNN

Question: Why is scaled_back_data shared across samples within a batch?

Closed this issue · 1 comment

hzxie commented

Dtype* scaled_back_data = scaled_back.mutable_gpu_data();
const int in_size = in_height_ * in_width_;
const int out_size = out_height_ * out_width_;
for (int n = 0; n < num_; ++n) {
  BlurOperation& op = operations_[n];
  const Dtype* norm_there_data = op.norm_there_->gpu_data();
  const Dtype* norm_back_data = op.norm_back_->gpu_data();
  for (int c = 0; c < num_output_; ++c) {
    caffe_gpu_mul(out_size,
        top_diff + top_blob.offset(n, c), norm_back_data,
        scaled_back_data + scaled_back.offset(0, c));

According to the code above, scaled_back_data is indexed with scaled_back.offset(0, c), so it is shared across all samples within a batch.
Could you tell me the reason?

I think scaled_back_data should be declared within the for loop, i.e., one buffer per sample.

hzxie commented

Because the variable only holds the intermediate result for the current sample. It is overwritten on every iteration of the batch loop and consumed before the next sample is processed, so a single shared buffer is sufficient and no per-sample allocation is needed.
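
To illustrate the pattern, here is a minimal CPU-side sketch (this is not the actual bilateralNN layer code; the function and buffer names are made up). The point is that the scratch buffer is sized for one sample, gets overwritten each iteration, and is fully used before the loop moves on, which is why it can be indexed with offset(0, c) rather than offset(n, c):

#include <vector>

// Minimal sketch (illustrative names, not the actual layer code) of a
// scratch buffer sized for ONE sample that is allocated once and reused
// for every sample in the batch.
void backward_sketch(const std::vector<float>& top_diff,   // num * channels * out_size
                     const std::vector<float>& norm_back,  // out_size (per-pixel normalization)
                     std::vector<float>& bottom_diff,      // num * channels * out_size
                     int num, int channels, int out_size) {
  // Shared scratch buffer: one sample wide, hence the offset(0, c)-style indexing.
  std::vector<float> scaled_back(channels * out_size);

  for (int n = 0; n < num; ++n) {
    // Step 1: fill the scratch buffer with the scaled gradient of sample n.
    for (int c = 0; c < channels; ++c)
      for (int i = 0; i < out_size; ++i)
        scaled_back[c * out_size + i] =
            top_diff[(n * channels + c) * out_size + i] * norm_back[i];

    // Step 2: consume the scratch buffer for sample n before the next
    // iteration overwrites it (here, simply accumulate it into bottom_diff).
    for (int c = 0; c < channels; ++c)
      for (int i = 0; i < out_size; ++i)
        bottom_diff[(n * channels + c) * out_size + i] +=
            scaled_back[c * out_size + i];
  }
}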