This project is part of my PhD thesis. I use the problem of resource allocation in Radio Access Network (RAN) slicing to demonstrate the need for, and potential approaches to, safe and accelerated DRL-based radio resource management (RRM). The related publications are listed in a separate section below and will be continuously updated.
This documentation covers the requirements, how to run the examples, the repository structure, the related publications, and licensing.
Make sure that Jupyter Notebook and the following Python packages are installed:
- matplotlib
- numpy
- pandas
- gym
- tensorforce
- scipy

(The `math` module is part of the Python standard library and does not need to be installed separately.)
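The third-party dependencies can be installed in one step with pip; this sketch assumes a working Python environment with pip on the PATH:

```shell
# Install the required third-party packages; jupyter is included so the
# example notebooks can be opened. The math module ships with the Python
# standard library and requires no installation.
pip install jupyter matplotlib numpy pandas gym tensorforce scipy
```

Installing inside a virtual environment (e.g. venv or conda) is advisable, since tensorforce pins specific TensorFlow versions.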
Go to the examples folder and run the notebook of interest.
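For example, from the folder containing the download (the top-level folder name is taken from the listing below):

```shell
# Launch the Jupyter Notebook dashboard from the examples folder;
# any of the listed .ipynb files can then be opened and run in the browser.
cd SADRL-master/examples
jupyter notebook
```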
Within the download you will find the following files:
SADRL-master/
├── examples/
│   ├── Dueling_DQN agent - reward function #1 - sample traffic #1.ipynb
│   ├── Fixed Slicing - reward function #1 - sample traffic #1.ipynb
│   ├── Hard Slicing - reward function #1 - sample traffic #1.ipynb
│   ├── Hybrid (Policy Reuse and Distillation) - PPO agent - reward function #2 - sample traffic #4.ipynb
│   ├── Policy Distillation - PPO agent - reward function #2 - sample traffic #4.ipynb
│   ├── Policy Reuse - PPO agent - reward function #2 - sample traffic #4.ipynb
│   ├── PPO agent - reward function #1 - sample traffic #4.ipynb
│   ├── PPO agent - reward function #2 - sample traffic #4.ipynb
│   └── PPO agent - reward function #3 - sample traffic #4.ipynb
├── lib/
│   ├── agents/
│   │   └── tforce.py
│   ├── envs/
│   │   └── slicing_env.py
│   └── utils.py
├── LICENSE
└── README.md
The related publications are:
- Toward Safe and Accelerated Deep Reinforcement Learning for Next-Generation Wireless Networks
- Transfer Learning-Based Accelerated Deep Reinforcement Learning for 5G RAN Slicing
SARL-RRM is Copyright © 2021 Ahmad Nagib. It is free software and may be redistributed under the terms specified in the LICENSE file. A human-readable summary of (and not a substitute for) the license is available at https://creativecommons.org/licenses/by-nc-sa/4.0/.