DDPG_Fetch
Exploring the performance of Prioritized Experience Replay (PER) with the DDPG+HER scheme on the Fetch Robotics Environment
Plots for Mean Success Rates for different Fetch Environments
Performance plots when varying the alpha parameter of PER
* Correction: the plot on the right is for FetchSlide but was mistakenly labelled as FetchPush.

Adding PER and fine-tuning the alpha parameter boosts performance.
PER can be integrated into the DDPG+HER framework in several ways, and a well-chosen integration could yield larger performance gains. (The integration of PER in this code isn't polished; it is something I tried out over a weekend.)
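For reference, the proportional variant of PER that the alpha parameter controls can be sketched as below. This is a minimal, illustrative buffer (flat array, O(N) sampling, no sum-tree), not the exact implementation in this repo; the class and parameter names are hypothetical.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional PER sketch. alpha=0 gives uniform
    sampling; alpha=1 samples fully in proportion to priority."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.storage = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current max priority so they are
        # replayed at least once before their TD error is known.
        max_prio = self.priorities.max() if self.storage else 1.0
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.storage)]
        probs = prios ** self.alpha          # alpha sharpens/flattens priorities
        probs /= probs.sum()
        idxs = np.random.choice(len(self.storage), batch_size, p=probs)
        # Importance-sampling weights correct the bias from
        # non-uniform sampling; normalised by the max weight.
        weights = (len(self.storage) * probs[idxs]) ** (-beta)
        weights /= weights.max()
        batch = [self.storage[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority is |TD error| plus a small epsilon so no
        # transition gets zero replay probability.
        self.priorities[idxs] = np.abs(td_errors) + eps
```

In a DDPG-style update, the returned `weights` would multiply the per-sample critic loss, and the new TD errors would be written back with `update_priorities` after each gradient step.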
Use the command below to start training. (Avoid using sudo if you get an "EXPORT LIBRARY.. .bashrc" error.)
```
mpirun -np 19 python3 train.py
```