TeamHypersomnia/rectpack2D

Same input at different scales gives inefficient results

zxhcho opened this issue · 12 comments

Hi Patryk!

I've run into another problem and can't figure out why. Maybe you can give me some useful advice.

[Screenshot from 2020-12-19 17-17-49]

When inputting three rectangles with (w,h) = {{3,1}, {2,1}, {1,1}}, all four methods in your example return (w,h) = {1,6}, which is reasonable.

[Screenshot from 2020-12-19 17-14-32]

But when I simply multiply these rectangles by 2, i.e. input (w,h) = {{6,2}, {4,2}, {2,2}}, I get some results with (w,h) = {6,6}, which is clearly not optimal.

[Screenshot from 2020-12-19 17-15-49]

If I set the factor to 4, I get a reasonable result again.

[Screenshot from 2020-12-19 17-20-56]
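For reference, here is essentially what I run - it's the setup from your README/example with my three sizes and a scale factor (a minimal sketch; I abbreviated the example's callbacks inline, and only the sizes array and `factor` are mine):

```cpp
#include <utility>
#include <vector>

#include "finders_interface.h" // rectpack2D

using namespace rectpack2D;

int main() {
	constexpr bool allow_flip = true;
	using spaces_type = empty_spaces<allow_flip, default_empty_spaces>;
	using rect_type = output_rect_t<spaces_type>;

	const auto max_side = 1000;
	const auto discard_step = 1;
	const int factor = 2; // 1 and 4 pack tightly, 2 yields the 6x6 bin

	const std::pair<int, int> sizes[] = { {3, 1}, {2, 1}, {1, 1} };

	std::vector<rect_type> rectangles;

	for (const auto& [w, h] : sizes) {
		rectangles.emplace_back(rect_xywh(0, 0, w * factor, h * factor));
	}

	const auto result_size = find_best_packing<spaces_type>(
		rectangles,
		make_finder_input(
			max_side,
			discard_step,
			[](rect_type&) { return callback_result::CONTINUE_PACKING; },
			[](rect_type&) { return callback_result::ABORT_PACKING; },
			flipping_option::ENABLED
		)
	);

	// result_size.w and result_size.h hold the final bin dimensions.
}
```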

I'm curious why simple scaling influences the result at all. As you mentioned in your example, the sort priority shouldn't change when I merely multiply by a constant factor.

Maybe you can give me some useful tips.

Thanks again!

Hey there again! What exactly is your discard_step set to? The algorithm always starts with square bins, so a result like 6x6 is expected to occur internally - however, the algorithm then proceeds to search for optimal rectangular bins. In the case of such tiny bins as yours, further optimization might be prematurely aborted because the width of the next bin candidate would differ by less than, say, 128 pixels. If you need such fine granularity, set the discard_step parameter to 1 or 2. If that doesn't help, let me know which sort functions exactly were used here and in which order.
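Roughly, the idea is something like this (a simplified sketch of the concept only, not the actual source - `everything_fits` and the constants here are stand-ins):

```cpp
#include <cstdio>

// Stand-in for "do all rectangles fit into a bin of this side?".
static bool everything_fits(const int side) {
	return side >= 6; // pretend the rectangles need at least a 6-pixel side
}

int main() {
	const int discard_step = 128; // a large step aborts refinement almost immediately
	int candidate_side = 1000;    // stand-in for the first square that fits everything
	int step = candidate_side / 2;

	// The search halves its refinement step each round and gives up once the
	// step drops below discard_step - so tiny bins never get fine-tuned.
	while (step > discard_step) {
		if (everything_fits(candidate_side - step)) {
			candidate_side -= step; // a smaller bin still works - keep it
		}
		step /= 2;
	}

	// Prints 250 even though 6 would suffice: refinements finer than
	// discard_step pixels were discarded.
	std::printf("settled on side %d\n", candidate_side);
}
```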

Cheers!

Thanks for your reply. I just use the 4 default methods listed in your ./example/main.cpp, and discard_step was set to 1 by default. The figures above show the outputs of the four example methods.

I call your API the way your example is written:

[Screenshot from 2020-12-19 21-12-31]

As you can see, method 1 in your example sorts the rectangles:

by area, by perimeter, by bigger side, by width, by height and by "pathological multiplier"

When I scale these rectangles, the relative order stays the same, but the result changes, which confuses me.
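To be concrete, an ordering like "by area" in the shape of your example's comparators only depends on products of the dimensions (`by_area` here is just an illustrative name, with `rect_type` as in the snippet above):

```cpp
// Illustrative comparator in the shape of the example's custom orders.
auto by_area = [](const rect_type* const a, const rect_type* const b) {
	return a->w * a->h > b->w * b->h;
};

// Scaling every rectangle by k turns each key a->w * a->h into k*k times
// its old value on both sides of the comparison, so the sort order
// cannot change.
```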

Hmm... now that's really odd. I might've screwed up something badly in the implementation of discard_step, as it still looks to me like the algorithm is ending its search prematurely for some reason. I think the passed orderings are scale-invariant, so those should be fine.

That's OK. For now we work around this problem by calling all 4 methods listed in the example and choosing the best-performing result.

By the way, we measure packing performance as {sum of rectangle areas} / {result_size.w * result_size.h}.
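In code, that is simply (with `rectangles` and `result_size` as in the snippet earlier in the thread):

```cpp
// Packing efficiency: total rectangle area over bin area, at most 1.0.
double total_area = 0;

for (const auto& r : rectangles) {
	total_area += double(r.w) * r.h;
}

const double efficiency = total_area / (double(result_size.w) * result_size.h);
```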

Yeah, that way of measuring performance is totally correct. I'll look into this issue in a bit. So you're saying that other methods don't suffer from this phenomenon?

Yes, you can refer to the third picture in this issue. Two methods still work properly:

They are the second and fourth methods listed in your example code.

Okay, so it looks like it might have something to do with flipping the rectangles, since when I disable flipping, the results become perfectly scale-invariant: e.g. with factor=1 it's 3x2, for 2 it's 6x4, for 3 it's 9x6, etc. I'm still looking.
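For anyone reading along, disabling flipping at runtime is just the last argument of make_finder_input (same call shape as in the example, with the same variables as in the earlier snippet):

```cpp
// Same call as in the example, with runtime flipping turned off.
const auto result_size = find_best_packing<spaces_type>(
	rectangles,
	make_finder_input(
		max_side,
		discard_step,
		report_successful,
		report_unsuccessful,
		flipping_option::DISABLED // with this, the results scale perfectly
	)
);
```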

At the very least, the reason it differs across methods is that method 2 specifies a unique my_custom_order_2 - which does not appear anywhere else and puts the "less" pathological rectangles first (as opposed to prioritizing the "more" pathological ones).

I think I have just found a solution - although not an explanation

Take a look at the latest commit. On my side, the results are now scale-invariant in your particular corner case. Decrease discard_step further into the negative values if needed. I am still trying to find an explanation for this weird behaviour, but this should at the very least solve your problem.

Basically, if you now give a negative discard_step, the algorithm tries more aggressively to optimize bin dimensions, down to single pixels of width and height, so it might be a tiny bit slower but should now return near-optimal packings for these tiny rectangle sets.
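So on your side the only change is the parameter value, e.g.:

```cpp
// After the linked commit: a negative discard_step makes the finder keep
// refining bin dimensions down to single pixels. Decrease it further
// (-2, -4, ...) if you need an even more aggressive search.
const auto discard_step = -1;
```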

Wow! It really works.

Setting discard_step to a negative value is an awesome idea!

Yeah, previously the algorithm didn't EVER try to optimize the bin sizes using 1 pixel of difference, even with discard_step = 1. So depending on whether the smallest viable container size had even or odd dimensions, the search could exit earlier or later, and the intermediate bin sizes for differently scaled rectangles could have different parity - e.g. 3/2 = 1, but 4/2 is still 2, so the search goes further for 4 but not for 3.
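Here's that effect in miniature (illustrative only, not the actual source):

```cpp
#include <cstdio>

int main() {
	const int discard_step = 1; // the old lower bound on the refinement step

	for (const int start : { 3, 4 }) {
		int step = start / 2;
		int levels = 0;

		// Old behaviour: refinement already stops when step <= discard_step,
		// so a step of 1 pixel is never tried.
		while (step > discard_step) {
			++levels;
			step /= 2;
		}

		// Prints 0 extra levels for 3 (3/2 = 1 stops immediately),
		// but 1 extra level for 4 (4/2 = 2 goes one round deeper).
		std::printf("start=%d: %d extra refinement level(s)\n", start, levels);
	}
}
```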

In any case, I'm glad that it works for you now! ☺️ I hope your research turns out successful!