CBS does not work well
Opened this issue · 3 comments
Hi, I'm trying to test CBS to see whether it fits my task. I created the following YAML file for testing:
```yaml
agents:
- goal: [2, 0]
  name: agent0
  start: [1, 3]
- goal: [3, 1]
  name: agent1
  start: [0, 2]
- goal: [1, 0]
  name: agent2
  start: [2, 3]
map:
  dimensions: [4, 4]
  obstacles:
  - !!python/tuple [0, 0]
  - !!python/tuple [0, 3]
  - !!python/tuple [3, 0]
  - !!python/tuple [3, 3]
```
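As a quick sanity check on an input file like this, one can verify that every start and goal lies inside the grid and off the obstacle cells. A throwaway sketch (the coordinate values are copied from the YAML above; the `(x, y)` convention is an assumption):

```python
# Sanity-check the 4x4 map: every start/goal must be inside the grid
# and must not coincide with an obstacle. Values copied from the YAML above.
dimensions = (4, 4)
obstacles = {(0, 0), (0, 3), (3, 0), (3, 3)}
agents = {
    "agent0": {"start": (1, 3), "goal": (2, 0)},
    "agent1": {"start": (0, 2), "goal": (3, 1)},
    "agent2": {"start": (2, 3), "goal": (1, 0)},
}

def in_bounds(cell):
    x, y = cell
    return 0 <= x < dimensions[0] and 0 <= y < dimensions[1]

for name, cfg in agents.items():
    for key in ("start", "goal"):
        cell = cfg[key]
        assert in_bounds(cell), f"{name} {key} {cell} is outside the map"
        assert cell not in obstacles, f"{name} {key} {cell} is on an obstacle"
print("all starts and goals are valid")
```

This rules out an invalid input as the source of the differing outputs.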
When I run the code as described in the README.md, the output differs between runs on this same input: the solution for each agent in output.yaml changes, the depth of the constraint tree changes with it, and so does the runtime. How can I make CBS output a stable, unique solution on every run? Thank you for taking the time to look at my feedback; I look forward to your response and suggestions.
@Aceyli I suppose that's a bug in the code. Unfortunately, I don't have the time to look into this at the moment. The CBS algorithm I've implemented is based on this publication.
Please feel free to make a PR in case you were able to figure out what's going wrong
@Aceyli @atb033 The behavior described above is not a bug. There is non-determinism in both A* and CBS (due to the use of an open set), so they can return different answers whenever more than one solution is possible. If an agent has more than one path from its start location to its assigned goal location, this code can return any of them.
Removing this non-determinism (by converting the open set into an ordered open list) would hurt the runtime of A* in finding paths, and the runtime of CBS, too much.
Please let me know in case we are not on the same page.
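If reproducibility matters more than raw speed, a common middle ground is to keep the heap but break ties deterministically, e.g. on (f, h, insertion order). This is a sketch of the general technique, not this repository's API; all names here are hypothetical:

```python
import heapq
import itertools

def astar_deterministic(start, goal, neighbors, h):
    """A* whose expansion order is fully determined by its inputs:
    ties on f are broken by h, then by an insertion counter, so
    repeated runs on the same input return the same path."""
    counter = itertools.count()  # deterministic tie-breaker
    open_heap = [(h(start), h(start), next(counter), start)]
    came_from = {}
    g = {start: 0}
    closed = set()
    while open_heap:
        _, _, _, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if current in closed:
            continue
        closed.add(current)
        for nxt in neighbors(current):  # neighbors() must yield in a fixed order
            tentative = g[current] + 1
            if tentative < g.get(nxt, float("inf")):
                g[nxt] = tentative
                came_from[nxt] = current
                heapq.heappush(open_heap, (tentative + h(nxt), h(nxt), next(counter), nxt))
    return None

# Example on the 4x4 grid from the first report:
obstacles = {(0, 0), (0, 3), (3, 0), (3, 3)}

def neighbors(c):
    x, y = c
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]  # fixed order
    return [(a, b) for a, b in cand
            if 0 <= a < 4 and 0 <= b < 4 and (a, b) not in obstacles]

goal = (2, 0)
h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
path = astar_deterministic((1, 3), goal, neighbors, h)
```

With this ordering, two runs on the same input expand states in the same order, so the low-level paths (and hence the constraint tree CBS builds on top of them) are reproducible; the only remaining cost is the extra tuple comparison per heap operation.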
Hi, I also have a problem with CBS: in this scenario the two robots start on the same side, but their goals are in reverse order, so they have to use the "corridor" at the top to swap positions.
Unfortunately, when I run the code the two robots seem to enter an infinite loop and no solution is found; yet from the paper we know the algorithm is complete, so if a solution exists it should be able to find it.
Here is my input.yaml:
```yaml
agents:
- goal: !!python/tuple [14, 0]
  name: agent0
  start: !!python/tuple [0, 0]
- goal: !!python/tuple [12, 0]
  name: agent1
  start: !!python/tuple [2, 0]
map:
  dimensions: [16, 2]
  obstacles:
  - !!python/tuple [0, 1]
  - !!python/tuple [1, 1]
  - !!python/tuple [2, 1]
  - !!python/tuple [3, 1]
  - !!python/tuple [5, 1]
  - !!python/tuple [6, 1]
  - !!python/tuple [7, 1]
  - !!python/tuple [8, 1]
  - !!python/tuple [9, 1]
  - !!python/tuple [10, 1]
  - !!python/tuple [11, 1]
  - !!python/tuple [12, 1]
  - !!python/tuple [13, 1]
  - !!python/tuple [14, 1]
  - !!python/tuple [15, 1]
```