MCZhi/DIPP

Some questions in my experiment

Closed this issue · 4 comments

Hi authors, thank you for your open-source code. I have some questions from trying to reproduce the results in the paper.
As a premise, I preprocessed all the raw data in /orig/scenario/training_20s and randomly selected 200,000 samples as the training set and 24,000 as the validation set. All other parameters are consistent with the original paper.
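
For reference, here is a minimal sketch of the random split described above, assuming the preprocessed samples live as individual files in one directory (the paths and layout are illustrative, not the repo's actual pipeline):

```python
import os
import random
import shutil

random.seed(42)  # fix the seed so the split is reproducible

src = "processed/training_20s"  # hypothetical output dir of the preprocessing step
files = sorted(os.listdir(src))
picked = random.sample(files, 200_000 + 24_000)
train, val = picked[:200_000], picked[200_000:]

for name, subset in [("train", train), ("val", val)]:
    dst = os.path.join("dataset", name)
    os.makedirs(dst, exist_ok=True)
    for f in subset:
        shutil.copy(os.path.join(src, f), dst)
```
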
QUESTIONS:
1. Planner optimizer parameters: with max_iterations=2, step_size=0.2 used in training, setting max_iterations=50, step_size=0.2 in the open-loop test gave us worse metrics than keeping max_iterations=2, step_size=0.2 in the open-loop test (a minimal sketch of these optimizer settings follows after the questions).
2. Open-loop test metrics: we found that some metrics were extraordinarily poor:

(screenshot: open-loop test metrics)

So we tried to change the plan cost weights to [1, 10, 10, 1, 1, 30, 100, 10, 10] and got the following result:

(screenshot: open-loop test metrics with the modified cost weights)

There still seems to be a gap from the metrics reported in the paper.
3. Open-loop test visualization results:

(attached videos: 3c9e5857e5cd760f.mp4, 4bf10a321ae77a4a.mp4)

https://user-images.githubusercontent.com/33254134/200212218-6143c223-2772-4555-9abf-f114c14871b2.mp4

The visual quality seems mediocre, especially when turning.

By the way, the open-loop test metrics above were obtained with max_iterations=2, step_size=0.2.
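
For concreteness, here is a self-contained sketch of the two optimizer settings in question. It assumes a Theseus-style Gauss-Newton optimizer (DIPP's differentiable planner is built on Theseus); the toy objective is purely illustrative, not the repo's actual planner costs:

```python
import torch
import theseus as th

def run(max_iterations: int, step_size: float) -> torch.Tensor:
    # Toy 2-D least-squares problem; the real planner's costs are far richer.
    x = th.Vector(2, name="x")
    target = th.Variable(torch.tensor([[1.0, 2.0]]), name="target")

    def err_fn(optim_vars, aux_vars):
        # Residual: offset between the current x and the target.
        return optim_vars[0].tensor - aux_vars[0].tensor

    objective = th.Objective()
    objective.add(th.AutoDiffCostFunction([x], err_fn, 2, aux_vars=[target]))

    optimizer = th.GaussNewton(objective, max_iterations=max_iterations, step_size=step_size)
    layer = th.TheseusLayer(optimizer)
    sol, info = layer.forward({"x": torch.zeros(1, 2)})  # start from a zero initialization
    return sol["x"]

# The two settings under discussion: few vs. many inner iterations.
print(run(max_iterations=2, step_size=0.2))
print(run(max_iterations=50, step_size=0.2))
```

On this linear toy problem, each Gauss-Newton step with step_size=0.2 closes only 20% of the remaining gap, so after 2 iterations roughly 64% of the initial offset remains; with so few iterations, the quality of the initialization dominates the final metrics.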

MCZhi commented

Hi, thank you for your interest in our work. Here are my answers to your questions and some suggestions.

  1. I am not sure what kind of metrics you are referring to, but I think that using a smaller number of iterations could lead to better ADE or FDE if the initialization is already close to the ground truth. Even using the initial trajectory alone can give you better position-error metrics, while other metrics could be worse (the standard ADE/FDE definitions are sketched right after this list).
  2. The poor acceleration and jerk metrics may be because you are using fewer iterations, and your manually designed cost function may not deliver the expected results, especially in closeness to human driving. I would suggest still using the learned cost function.
  3. The video files seem broken and unable to watch. It would be helpful for me to analyze your questions if you could provide functioning files.
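
For clarity, the ADE/FDE mentioned in point 1 are the standard displacement errors; here is a minimal PyTorch sketch (the tensor shapes are illustrative):

```python
import torch

def ade_fde(pred: torch.Tensor, gt: torch.Tensor):
    """pred, gt: [batch, timesteps, 2] planned and ground-truth xy trajectories."""
    dist = torch.linalg.norm(pred - gt, dim=-1)  # per-step position error, [batch, timesteps]
    ade = dist.mean()                            # average displacement error over all steps
    fde = dist[:, -1].mean()                     # final displacement error at the last step
    return ade.item(), fde.item()
```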

Please feel free to let me know if you have any further questions.

Thank you for your reply!

  1. We used 5 pre-training epochs and 20 epochs in total, just as provided in the open-source code. And if we use max_iterations=50, step_size=0.2, we get worse results on almost every metric of the open-loop test.
  2. We only changed the self.register_buffer('scale', torch.tensor([...])) line in predictor.py; the cost weights are still learnable (see the sketch after this list).
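
As a minimal illustration of the distinction at play here (a generic PyTorch sketch, not the actual predictor.py): a tensor stored via register_buffer is saved with the model but never updated by the optimizer, whereas an nn.Parameter is trained. The hypothetical module below combines a fixed scale with learnable weights:

```python
import torch
import torch.nn as nn

class CostWeights(nn.Module):  # hypothetical module, for illustration only
    def __init__(self):
        super().__init__()
        # Fixed scaling (the values we changed), saved with the model but not trained.
        self.register_buffer('scale', torch.tensor([1., 10., 10., 1., 1., 30., 100., 10., 10.]))
        # Learnable raw weights, updated by gradient descent.
        self.weights = nn.Parameter(torch.ones(9))

    def forward(self) -> torch.Tensor:
        # Effective cost weights: learnable values modulated by the fixed scale.
        return self.scale * self.weights

m = CostWeights()
print([name for name, _ in m.named_parameters()])  # ['weights'] -- 'scale' is a buffer
```
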
(attached videos: 1.mp4, 2.mp4, 3.mp4)

Sorry about the broken videos!
The above three are open-loop test videos; the AV seems to go out of control in the closed-loop test.
So we are wondering whether you could share your chosen dataset, to help us reproduce the results in the paper.
We look forward to your reply.

MCZhi commented

Sorry for the late reply. I guess the possible reason is that the motion planner is not properly learned; it could be the cost function or the initialization. I don't think the chosen dataset is an issue here, and you may refer to #5 for more information.