wayveai/mile

Confused about cuda version

Closed this issue · 7 comments

Hello author, thank you for your excellent work!
When reproducing and evaluating your work, I ran into the problem below. I'm not sure whether it is caused by my device's CUDA version or by my torch version, which is quite confusing.

[screenshots of the error]

My device: GPU NVIDIA GeForce RTX 4070 Ti, CUDA version 12.1

I tried torch 2.1 and torch 1.11 with the builds corresponding to CUDA 12.1, but neither seemed to solve the problem. Do you have any suggestions?

Hello, what Python version are you using? You would need Python 3.8+ for the correct torch version to be installed, I think (wheels of newer torch versions are not published for older Python releases).
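If it helps, a quick environment sanity check along these lines can confirm the Python and torch versions. The `MIN_PYTHON` constant and function names here are just illustrative, not part of the repo:

```python
import sys

MIN_PYTHON = (3, 8)  # wheels for recent torch releases are only published for 3.8+

def python_ok() -> bool:
    """True if the interpreter is new enough for recent torch wheels."""
    return sys.version_info[:2] >= MIN_PYTHON

def report() -> list:
    """Collect version info without crashing when torch is missing."""
    lines = [f"python {sys.version.split()[0]} ok={python_ok()}"]
    try:
        import torch
        lines.append(f"torch {torch.__version__} built for CUDA {torch.version.cuda}")
        lines.append(f"cuda available: {torch.cuda.is_available()}")
    except ImportError:
        lines.append("torch not installed")
    return lines

if __name__ == "__main__":
    print("\n".join(report()))
```

Running this in the environment used for the repo shows at a glance whether the interpreter, the torch build, and the CUDA runtime agree.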

Thank you very much for your reply!
Following your suggestion, I have solved the problem above. I also want to ask: can my device handle the training part of your code? CARLA seems to be stuck when I run `bash run/evaluate.sh`. How long does evaluation usually take?
In addition, I cannot see the BEV picture from your work during the evaluation process. Is this normal?
[screenshot]

No problem.

The evaluation is not stuck; perhaps I should print some logs there. It is running, and after ~15-20 minutes the run will finish and the driving score will be saved to disk. By default we do not decode/save the BEV output during evaluation, but you could enable that by modifying the code.

I am very happy and grateful for your reply!
Will the BEV output be decoded/saved during training?
My device is a single GPU (NVIDIA GeForce RTX 4070 Ti) on Ubuntu 18.04; do you think it can be used to try training?
Thanks again!

By default the BEV decoding is deactivated, so you'll need to activate it. Your GPU should be suitable for training the model.
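For anyone who enables the BEV decoding: once you have the predicted BEV class-index map as an array, rendering it to an image is a simple palette lookup. The palette and class list below are made up for illustration; MILE's actual classes and colours may differ:

```python
import numpy as np

# Hypothetical palette mapping class index -> RGB colour
# (the real class list in the repo may be different).
PALETTE = np.array([
    [0, 0, 0],        # 0: unlabelled
    [70, 70, 70],     # 1: building / static
    [128, 64, 128],   # 2: road
    [0, 255, 0],      # 3: vehicle
    [255, 0, 0],      # 4: pedestrian
], dtype=np.uint8)

def bev_to_rgb(class_map: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class indices to an (H, W, 3) uint8 RGB image."""
    return PALETTE[class_map]
```

Something like `PIL.Image.fromarray(bev_to_rgb(pred)).save("bev.png")` would then write each decoded frame to disk for inspection.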

Okay, thank you very much for your reply! You are a truly excellent and kind researcher; thank you again for answering my questions.

No problem at all, thanks for your kind words!