zplizzi/tensorflow-fast-rcnn

Changing kernel_width/height computations to floating-point

menglin0320 opened this issue · 1 comment

If you read the roi-pooling layer here, you can see that their code can handle the case where the width and height of the input kernel are smaller than the output.

Because you wrote

int kernel_width = roi_width / pooling_width;
int kernel_height = roi_height / pooling_height;

you can't properly handle any case where the pooled size does not evenly divide the input size.
I modified your code a little bit, and the modified code is here.
I don't know backpropagation well; do I also have to modify that part to accommodate my change in your script?
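
For concreteness, a minimal sketch of the failure mode described above, using the variable names from the quoted snippet (the concrete ROI and output sizes here are hypothetical):

```cpp
#include <cstdio>

int main() {
    // Hypothetical sizes: an ROI that is smaller than the pooled output grid.
    int roi_width = 4, roi_height = 4;
    int pooling_width = 7, pooling_height = 7;

    // Integer division truncates toward zero, so both kernel dimensions
    // become 0 and every pooling bin collapses to an empty region.
    int kernel_width = roi_width / pooling_width;    // 4 / 7 == 0
    int kernel_height = roi_height / pooling_height; // 4 / 7 == 0

    printf("kernel_width=%d, kernel_height=%d\n", kernel_width, kernel_height);
    return 0;
}
```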

@menglin0320 good catch. I knew it would not handle the case where the ROI is smaller than the output (as evidenced by the comment in the code), but that situation should generally be avoided anyway (I avoid it by filtering out any ROIs that are too small). However, changing the result of the division to a floating-point representation would also give a bit less position error than the current code for larger ROIs.
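
For reference, a sketch of what a floating-point version of the bin computation could look like, in the spirit of the reference Fast R-CNN ROI pooling. The function name `bin_bounds` and the `roi_start_*` parameters are illustrative, not code taken from this repository:

```cpp
#include <cmath>

// Compute the input-coordinate bounds of pooling bin (pw, ph) for one ROI.
// Keeping the bin size as a float (instead of truncating roi_width /
// pooling_width to an int) avoids empty bins when the ROI is smaller than
// the pooled output and reduces rounding error for larger ROIs.
void bin_bounds(int roi_start_w, int roi_start_h,
                int roi_width, int roi_height,
                int pooling_width, int pooling_height,
                int pw, int ph,
                int* wstart, int* wend, int* hstart, int* hend) {
    float bin_w = static_cast<float>(roi_width) / pooling_width;
    float bin_h = static_cast<float>(roi_height) / pooling_height;

    // floor/ceil give every output cell a non-empty span of input pixels,
    // with neighboring bins allowed to overlap slightly.
    *wstart = roi_start_w + static_cast<int>(std::floor(pw * bin_w));
    *wend   = roi_start_w + static_cast<int>(std::ceil((pw + 1) * bin_w));
    *hstart = roi_start_h + static_cast<int>(std::floor(ph * bin_h));
    *hend   = roi_start_h + static_cast<int>(std::ceil((ph + 1) * bin_h));
}
```

Typical max-pooling-based ROI pooling backward passes route gradients through the argmax locations stored during the forward pass, which is consistent with the note below that the gradient code needs no changes.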

The gradient code should not require any changes.

Feel free to submit a PR with your fixes.