The DROO algorithm is implemented with TensorFlow 1.x. If you use TensorFlow 2, please refer to DROO-tensorflow2. If you use PyTorch, please refer to DROO-PyTorch. If you are new to deep learning, please start with the TensorFlow 2 or PyTorch versions, whose code is much cleaner and easier to follow.
Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks
Python code to reproduce our DROO algorithm for wireless-powered mobile-edge computing [1], which takes the time-varying wireless channel gains as input and generates binary offloading decisions. It includes:
- memory.py: the DNN structure for the WPMEC, including the training and testing structures
- optimization.py: solves the resource allocation problem
- data: all data are stored in this subdirectory, including:
  - data_#.mat: training and testing data sets, where # ∈ {10, 20, 30} is the number of users
- main.py: run this file for DROO, including setting system parameters
- demo_alternate_weights.py: run this file to evaluate the performance of DROO when the WDs' weights are alternated
- demo_on_off.py: run this file to evaluate the performance of DROO when some WDs are randomly turned on/off
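To illustrate the pipeline described above: the DNN in memory.py outputs relaxed offloading decisions in [0, 1], which are quantized into candidate binary decisions and then scored by solving the resource allocation problem in optimization.py. Below is a simplified, hypothetical sketch of such a quantizer; the function name `quantize` and the "flip the least-confident entry" rule are illustrative assumptions, not the exact method used in this repository.

```python
import numpy as np

def quantize(m, K):
    """Generate up to K candidate binary offloading decisions from the
    DNN's relaxed output m (entries in [0, 1]).

    Candidate 0 thresholds m at 0.5; each further candidate flips the
    next least-confident entry (smallest |m_i - 0.5|).
    NOTE: a simplified sketch, not the repository's exact quantizer.
    """
    m = np.asarray(m, dtype=float)
    base = (m > 0.5).astype(int)
    candidates = [base]
    # indices ordered from least to most confident
    order = np.argsort(np.abs(m - 0.5))
    for k in range(min(K, len(m) + 1) - 1):
        cand = base.copy()
        cand[order[k]] = 1 - cand[order[k]]  # flip one entry
        candidates.append(cand)
    return candidates

# Example: 4 users; each candidate would be scored by the resource
# allocation solver, and the best one kept as the training label.
m = np.array([0.9, 0.4, 0.6, 0.1])
for c in quantize(m, 3):
    print(c)
```

Each candidate is a feasible binary offloading decision; in DROO the best-scoring one is stored in the replay memory and used to train the DNN.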
- L. Huang, S. Bi, and Y. J. Zhang, "Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks," IEEE Trans. Mobile Comput., DOI:10.1109/TMC.2019.2928811, Jul. 2019.
- Liang HUANG, lianghuang AT zjut.edu.cn
- Suzhi BI, bsz AT szu.edu.cn
- Ying Jun (Angela) Zhang, yjzhang AT ie.cuhk.edu.hk
- TensorFlow
- numpy
- scipy
- For the DROO algorithm, run main.py
- For the DROO demo with alternating-weight WDs, run demo_alternate_weights.py
- For the DROO demo with ON-OFF WDs, run demo_on_off.py