jangirrishabh/Overcoming-exploration-from-demos

the number of CPU cores to use

Closed this issue · 1 comment

When I set --num_cpu > 1, the program hangs for a long time after printing these lines:

2018-12-26 11:20:50.480133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2018-12-26 11:20:50.480139: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2018-12-26 11:20:50.480292: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6899 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:03:00.0, compute capability: 6.1)
Training...

It seems like the program never enters the training phase.
Is there any special setting required to use more than one CPU?

Sorry, it may not be a problem after all.
I was wondering why there was no print output, but that is probably because of the multiprocessing.
Thanks!
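
For what it's worth, if this code follows the usual OpenAI Baselines HER setup, --num_cpu launches multiple MPI workers and only the rank-0 worker prints logs, so the remaining workers look silent even though they are training. Below is a minimal sketch of that rank-gated logging pattern, assuming mpi4py; the function name `train_one_epoch` and the loss value are illustrative placeholders, not code from this repository.

```python
# Sketch of rank-gated logging across MPI workers (assumes mpi4py is installed).
# Only rank 0 prints, which is why worker processes appear to produce no output.
from mpi4py import MPI


def train_one_epoch(epoch):
    # Placeholder for the per-worker training step.
    return epoch * 0.1  # dummy "loss" value


def main():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    for epoch in range(3):
        loss = train_one_epoch(epoch)
        # Average the metric across all workers before reporting.
        mean_loss = comm.allreduce(loss, op=MPI.SUM) / comm.Get_size()
        if rank == 0:
            # Only the rank-0 worker logs progress.
            print(f"epoch {epoch}: mean loss {mean_loss:.3f}")


if __name__ == "__main__":
    main()
```

Launched with something like `mpirun -np 4 python script.py`, only one process would print, matching the "no output" behavior described above.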