RLOpensource/IMPALA-Distributed-Tensorflow

Single Machine execution

Pranav-India opened this issue · 1 comment

Hi, I want to implement IMPALA on my personal machine (CPU only). Can you tell me what changes are needed? I tried the basic changes after referring to DeepMind's scalable_agent implementation, but this code does not work.

I'm sorry for the late reply to this question.

I think you need to change the first line of start.sh from

python trainer_breakout.py --num_actors=32 --task=0 --batch_size=32 --queue_size=128 --trajectory=20 --learning_frame=1000000000 --start_learning=0.0006 --end_learning=0.0 --discount_factor=0.99 --entropy_coef=0.05 --baseline_loss_coef=1.0 --gradient_clip_norm=40.0 --job_name=learner --reward_clipping=abs_one --lstm_size=256 &

to

CUDA_VISIBLE_DEVICES=-1 python trainer_breakout.py --num_actors=32 --task=0 --batch_size=32 --queue_size=128 --trajectory=20 --learning_frame=1000000000 --start_learning=0.0006 --end_learning=0.0 --discount_factor=0.99 --entropy_coef=0.05 --baseline_loss_coef=1.0 --gradient_clip_norm=40.0 --job_name=learner --reward_clipping=abs_one --lstm_size=256 &
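Setting CUDA_VISIBLE_DEVICES=-1 hides all GPUs from the process, so TensorFlow falls back to the CPU. As a minimal sketch of an alternative, assuming the trainer builds a TF1-style Session (the names below are illustrative, not taken from the repo), you could force CPU execution from inside the script instead of editing start.sh:

```python
# Sketch: hide GPUs before TensorFlow initializes, then build a CPU-only session.
# This mirrors the effect of the CUDA_VISIBLE_DEVICES=-1 prefix in start.sh.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # must be set before importing tensorflow

import tensorflow as tf

# Extra safeguard: tell TF1 to expose zero GPU devices to the session.
config = tf.ConfigProto(device_count={"GPU": 0})
with tf.Session(config=config) as sess:
    # build and run the learner/actor graph here
    pass
```

The same prefix (or environment variable) would apply to any other process in start.sh that should stay off the GPU.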