The main goal of this project is **to allow a robot arm to learn by itself how to press a red button**.
A Unity environment was created to simulate the robot and its surroundings.
- The robot is in the center
- The red cube represents the button
- The green sphere represents a potential position of the cube
- The blue area represents the limit of the cube's positions
- The output camera is displayed at the bottom right
- A start button is implemented to launch the simulation
The shell displays information about the program's progress.
The results are represented as graphs.
A server links our Python script to the .exe simulation, allowing us to communicate with the robot.
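The link between the Python script and the simulation can be pictured as a simple TCP exchange. This is a minimal sketch, not the project's actual protocol: the host, port, and message format are assumptions.

```python
import socket

# Assumed host/port for the simulation's server; the real values live
# in the project's configuration.
HOST, PORT = "127.0.0.1", 9999

def send_command(command: str) -> str:
    """Send one command string to the simulation and return its reply."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(command.encode("utf-8"))
        return sock.recv(4096).decode("utf-8")
```

In practice the server side lives in the Unity executable, and the Python side opens a connection each time it needs to send an action or read the robot's state.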
The chosen solution is to use a deep reinforcement learning algorithm.
For reference, some good courses on this method:
The algorithm uses:
- A convolutional neural network (CNN), based on DeepMind's work, to infer the action from the robot's state
- A policy gradient approach to update the CNN (loss = -log(π) * A)
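The policy-gradient loss above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the project's code; the function and argument names are assumptions.

```python
import numpy as np

def policy_gradient_loss(action_probs, actions, advantages):
    """Compute loss = mean(-log(pi(a|s)) * A) over a batch.

    action_probs: (batch, n_actions) softmax outputs of the CNN
    actions:      (batch,) indices of the actions actually taken
    advantages:   (batch,) advantage (or return) of each action
    """
    # Probability the policy assigned to each taken action
    picked = action_probs[np.arange(len(actions)), actions]
    return np.mean(-np.log(picked) * advantages)
```

Minimizing this loss increases the probability of actions with positive advantage and decreases it for actions with negative advantage.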
- OS: Windows (Linux support in progress)
- TensorFlow
- Numpy
- PIL
- Matplotlib
In your personal folder, clone the GitLab repository:
git clone https://gitlab.com/RoboAcademy/UnityPySocket
From `(your folder)\UnityPySocket\Python\Python`, launch the script:
python main.py
The script prompts the user for several inputs:
- `Open Unity environment (y/n)`: opens the robot simulation
- `Name of the built executable`: the name of the .exe file
- `Use manual position (y/n)`: selects one of the two script types (manual or learning)
- `Restore session (y/n)`: relaunches an existing simulation (if the learning script is chosen)
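The prompt sequence above can be sketched as follows. The exact prompt strings and control flow are assumptions based on the list, not the real `main.py`.

```python
def ask_yes(prompt: str) -> bool:
    """Return True when the user answers 'y' (case-insensitive)."""
    return input(prompt + " ").strip().lower() == "y"

def configure() -> dict:
    """Collect the run options the script asks for at startup."""
    cfg = {"open_unity": ask_yes("Open Unity environment (y/n):")}
    if cfg["open_unity"]:
        cfg["exe_name"] = input("Name of the built executable: ").strip()
    cfg["manual"] = ask_yes("Use manual position (y/n):")
    if not cfg["manual"]:
        # Only the learning script can restore an existing session
        cfg["restore"] = ask_yes("Restore session (y/n):")
    return cfg
```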
- Click on the **start** button of the robot environment
Here we go! The simulation is launched!
- Jacob THORN
- Tristan BIDOUARD
With the help of:
- Santiago QUINTANA-AMATE
- Pablo BERMELL-GARCIA
- Kiran KRISHNAMURTHY
From _AIRBUS GROUP UK_