Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks
Python code to reproduce our DROO algorithm for Wireless-powered Mobile-Edge Computing (WPMEC) [1], which takes the time-varying wireless channel gains as input and generates binary offloading decisions. It includes the following files (a usage sketch follows the list):
- memory.py: the DNN structure for the WPMEC, including the training and testing structures, implemented with TensorFlow 1.x.
  - memoryTF2.py: the same DNN, implemented with TensorFlow 2.
  - memoryPyTorch.py: the same DNN, implemented with PyTorch.
- optimization.py: solves the resource allocation problem for a given offloading decision.
- data: all data sets are stored in this subdirectory, including:
  - data_#.mat: training and testing data sets, where # ∈ {10, 20, 30} is the number of users.
- main.py: the main DROO script, including the system parameter settings, implemented with TensorFlow 1.x. Run this file to reproduce DROO.
  - mainTF2.py: the same script, implemented with TensorFlow 2.
  - mainPyTorch.py: the same script, implemented with PyTorch.
- demo_alternate_weights.py: run this file to evaluate the performance of DROO when the WDs' weights are alternated.
- demo_on_off.py: run this file to evaluate the performance of DROO when some WDs are randomly turned on/off.
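
As a rough usage sketch, the snippet below shows how these files might fit together, with one DROO time frame per loop iteration. The .mat key (input_h), the MemoryDNN constructor arguments, and the decode/encode/bisection calls are illustrative assumptions rather than the repository's documented interface; check memory.py and optimization.py for the actual signatures.

```python
import numpy as np
import scipy.io as sio

from memory import MemoryDNN        # or: memoryTF2 / memoryPyTorch
from optimization import bisection  # assumed name of the resource-allocation solver

N = 10                                                  # number of WDs
channel = sio.loadmat('./data/data_%d' % N)['input_h']  # assumed .mat key

# Assumed constructor: a DNN mapping an N-dim channel-gain vector to N
# binary offloading decisions, backed by an internal replay memory.
mem = MemoryDNN(net=[N, 120, 80, N],
                learning_rate=0.01,
                training_interval=10,
                batch_size=128,
                memory_size=1024)

K = N  # candidate offloading decisions generated per time frame
for t in range(channel.shape[0]):
    h = channel[t, :]
    m_list = mem.decode(h, K)                      # K candidate binary decisions
    r_list = [bisection(h, m)[0] for m in m_list]  # score each candidate by
                                                   # solving resource allocation
    best = m_list[int(np.argmax(r_list))]
    mem.encode(h, best)                            # store the best (h, decision)
                                                   # pair; the DNN retrains
                                                   # periodically from this memory
```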
Cite this work

[1] L. Huang, S. Bi, and Y. J. Zhang, “Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks,” IEEE Trans. Mobile Comput., DOI: 10.1109/TMC.2019.2928811, Jul. 2019.

About the authors

- Liang HUANG, lianghuang AT zjut.edu.cn
- Suzhi BI, bsz AT szu.edu.cn
- Ying Jun (Angela) Zhang, yjzhang AT ie.cuhk.edu.hk
Required packages

- TensorFlow
- numpy
- scipy
How the code works

- For the DROO algorithm, run main.py. If you work with TensorFlow 2 or PyTorch, run mainTF2.py or mainPyTorch.py, respectively.
- For more DROO demos:
  - alternating-weight WDs: run demo_alternate_weights.py
  - ON-OFF WDs: run demo_on_off.py
  - Remember to edit the MemoryDNN import accordingly, changing `from memory import MemoryDNN` to `from memoryTF2 import MemoryDNN` or `from memoryPyTorch import MemoryDNN` if you are using TensorFlow 2 or PyTorch, respectively (an optional fallback pattern is sketched below).
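
If you switch backends frequently, one optional convenience pattern (not part of this repository) is to let the import fall back automatically:

```python
# Optional convenience pattern, not from the original code: prefer the
# TensorFlow 2 implementation and fall back to PyTorch when TF2 is unavailable.
try:
    from memoryTF2 import MemoryDNN      # TensorFlow 2 implementation
except ImportError:
    from memoryPyTorch import MemoryDNN  # PyTorch implementation
```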