idreesshaikh/Autonomous-Driving-in-Carla-using-Deep-Reinforcement-Learning

Why can't agents train using the GPU?

heyoupeng819 opened this issue · 1 comment

I see torch.device('cpu') in your code. I tried to change it to torch.device('cuda:0'), but it doesn't seem to work.
My torch.cuda.is_available() returns True.

You can definitely change it to GPU, but then you have to change a couple of things in the algorithm so that everything is processed on the GPU. As I understand it, what you're facing is a device inconsistency, since half of the processing is done on the GPU and half on the CPU.
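Not the repo's actual code, just a minimal sketch of the consistency the comment describes: pick one device, move the model's parameters to it, and make sure every tensor you feed in lives on that same device (the model and helper names here are hypothetical placeholders):

```python
import torch

# Pick the device once and reuse it everywhere.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hypothetical network, just to illustrate the pattern.
model = torch.nn.Linear(10, 2).to(device)  # parameters now live on the GPU
optimizer = torch.optim.Adam(model.parameters())

def select_action(observation):
    # Inputs must be on the same device as the model's parameters,
    # otherwise PyTorch raises a "tensors on different devices" RuntimeError.
    obs = torch.as_tensor(observation, dtype=torch.float32, device=device)
    with torch.no_grad():
        logits = model(obs)
    # Move the result back to the CPU before handing it to non-PyTorch code.
    return logits.argmax().cpu().item()
```

The typical failure mode when only torch.device is edited is exactly this: the network sits on cuda:0 while observations, replay-buffer samples, or stored log-probabilities are still CPU tensors, so the first forward or backward pass that mixes them errors out.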