Cost Problem
Opened this issue · 4 comments
Thank you for your library, it is very fast. But when I compute the assignment for this matrix, the result seems incorrect:
 -625.000   2187.500   -156.25    1000000
-2500.000   1000000   -2500.00   -2500.0
-1015.625  -1015.625   1000000    1000000
 1000000    1000000    1000000    1000000
The answer returned is x = [2, 3, 1, 0], y = [3, 2, 0, 1], but the correct result should be x = [0, 2, 1, 3].
The method I used is lapjv(extend_cost=False), though maybe that is not the reason. Actually, 1000000 is a large number I used to fill in for np.inf in my case.
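To make the report easier to reproduce, the cost matrix above can be sketched with NumPy alone, with np.inf filled by 1000000 as described (the lapjv call itself is left as a comment since it requires the library to be installed):

```python
import numpy as np

LARGE_FILL = 1_000_000  # value used here in place of np.inf

cost = np.array([
    [ -625.000,  2187.500,  -156.25,   np.inf],
    [-2500.000,    np.inf, -2500.00,  -2500.0],
    [-1015.625, -1015.625,   np.inf,   np.inf],
    [   np.inf,    np.inf,   np.inf,   np.inf],
])

# Replace every infinite entry with the large finite value:
cost_filled = np.where(np.isinf(cost), LARGE_FILL, cost)

# With the library installed, the assignment would then be computed via:
#   from lap import lapjv
#   opt, x, y = lapjv(cost_filled, extend_cost=False)
print(cost_filled[0, 3])  # 1000000.0
```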
For the following matrix, which contains no negatives, the assignment result is also incorrect:
[ 0.4       1.0       0.5       1000000
  0.0       1000000   0.0       0.0
  0.3166    0.3166    1000000   1000000
  1000000   1000000   1000000   1000000 ]
By the way, if I use np.inf directly, the result is also wrong. But if I use 100,000 to fill in for np.inf, the result is correct.
It seems to be due to the constant LARGE = 1000000 defined here:
Line 4 in c2b6309
I was able to make the test pass with this change: master...jvlmdr:issue19. However, I didn't remove LARGE completely because I didn't take the time to understand how it is employed in lapmod. It also seems to be exported to Python.
The issue here is that, for the extension to a square matrix to work, costs are expected to be less than the constant LARGE.
You can preprocess the input cost as follows:

import numpy as np
from lap import LARGE
cost[np.isinf(cost)] = LARGE - 1
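Assuming lap.LARGE equals 1000000 (it is hardcoded below so the sketch runs without the library installed; import it from lap when available), the preprocessing can be demonstrated end-to-end on the second matrix from the report:

```python
import numpy as np

LARGE = 1_000_000  # assumed value of lap.LARGE

cost = np.array([
    [0.4,    1.0,    0.5,    np.inf],
    [0.0,    np.inf, 0.0,    0.0],
    [0.3166, 0.3166, np.inf, np.inf],
    [np.inf, np.inf, np.inf, np.inf],
])

# Clamp infinities (and any finite entries >= LARGE) strictly below the sentinel:
cost = np.where(~np.isfinite(cost) | (cost >= LARGE), LARGE - 1, cost)

print(cost.max())  # 999999.0
```

After this step every entry is finite and strictly less than LARGE, which is the precondition the square-matrix extension relies on.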
Ideally, lapjv/lapmod should take an optional check_bounds argument that would validate the input cost and raise a descriptive exception if any values are greater than or equal to LARGE.
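No such check_bounds argument exists in lap today; a hypothetical standalone validator along the lines proposed above might look like this (the function name and error format are made up for illustration, and LARGE is again assumed to be 1000000):

```python
import numpy as np

LARGE = 1_000_000  # assumed to match lap's internal constant


def check_cost_bounds(cost: np.ndarray) -> None:
    """Raise a descriptive error if any cost is non-finite or >= LARGE.

    Hypothetical helper mimicking the proposed check_bounds behaviour;
    it is not part of the lap API.
    """
    bad = ~np.isfinite(cost) | (cost >= LARGE)
    if bad.any():
        first = tuple(np.argwhere(bad)[0])
        raise ValueError(
            f"{int(bad.sum())} cost entries are non-finite or >= LARGE "
            f"({LARGE}); first offending index: {first}"
        )


# Passes silently for a well-bounded cost matrix:
check_cost_bounds(np.array([[0.4, 1.0], [0.0, 0.5]]))
```

Such a check would have turned the silent wrong assignment in this issue into an immediate, self-explanatory error.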