This project aims to develop a reinforcement learning (RL) agent that can stack cubes in a simulated environment.
Before getting started, make sure you have the following installed:
- Python 3.x
- RL libraries (e.g., OpenAI Gym, Stable Baselines3)
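Before proceeding, it can help to confirm the RL libraries are importable. A small check like the following works; note that the module names `gymnasium` and `stable_baselines3` are assumptions here — older setups may ship `gym` instead of `gymnasium`:

```python
import importlib.util

# Check whether the RL libraries listed above can be found in the
# current environment. Module names are assumed, not pinned by this repo.
results = {
    mod: importlib.util.find_spec(mod) is not None
    for mod in ("gymnasium", "stable_baselines3")
}
for mod, found in results.items():
    print(f"{mod}: {'found' if found else 'missing'}")
```

If anything prints `missing`, install it before running the training script.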
- Clone the repository:

  ```bash
  git clone https://github.com/salvingeorge/cube_stacking_rl.git
  ```
- Navigate to the project directory:

  ```bash
  cd cube_stacking_rl
  ```
- Create a virtual environment (optional but recommended):

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  ```
- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Download the demonstration dataset:

  ```bash
  python -m mani_skill2.utils.download_demo "StackCube-v0"
  ```
- Convert the dataset into trajectories:

  ```bash
  python -m mani_skill2.trajectory.replay_trajectory --traj-path demos/v0/rigid_body/StackCube-v0/trajectory.h5 --save-traj -o state -c pd_ee_delta_pose --num-procs 8
  ```
- Train the agent:

  ```bash
  python state_imitation_learning_stackcube.py --demos=demos/v0/rigid_body/StackCube-v0/trajectory.state.pd_ee_delta_pose.h5
  ```

  This will load the demonstrations, train the model, and evaluate its performance. When prompted, enter your Weights & Biases (wandb) credentials for experiment logging.
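For intuition, the imitation-learning objective behind the training step boils down to fitting a policy to (state, action) pairs taken from the demonstrations. The sketch below illustrates that idea on synthetic data with a linear least-squares "policy" — the dimensions and model here are illustrative only, not the actual architecture used by `state_imitation_learning_stackcube.py`:

```python
import numpy as np

# Illustrative behavior-cloning sketch: fit a policy to (state, action)
# pairs. The real script trains on ManiSkill2 StackCube demonstrations;
# here we generate synthetic linear data so the example is self-contained.
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 8))   # stand-in for observed robot states
true_w = rng.normal(size=(8, 4))
actions = states @ true_w            # stand-in for expert actions

# The simplest possible imitation learner: least-squares regression
# from states to actions.
w, *_ = np.linalg.lstsq(states, actions, rcond=None)

mse = float(np.mean((states @ w - actions) ** 2))
print(f"imitation MSE: {mse:.2e}")
```

Because the synthetic actions are exactly linear in the states, the recovered policy matches the expert almost perfectly; with real demonstrations a neural policy and minibatch training take the place of the closed-form fit.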
If you would like to contribute to this project, follow these steps:
- Fork the repository on GitHub.

- Clone your forked repository:

  ```bash
  git clone https://github.com/your-username/cube_stacking_rl.git
  ```

- Create a new branch for your changes:

  ```bash
  git checkout -b feature/your-feature-name
  ```

- Make your changes and commit them:

  ```bash
  git commit -m "Add your commit message here"
  ```

- Push your changes to your forked repository:

  ```bash
  git push origin feature/your-feature-name
  ```

- Open a pull request on GitHub.
This project is licensed under the MIT License. See the LICENSE file for more details.