Implementation of the IEEE TII paper *Flexible Job Shop Scheduling via Graph Neural Network and Deep Reinforcement Learning*, IEEE Transactions on Industrial Informatics, vol. 19, no. 2, pp. 1600-1610, 2023.
```bibtex
@ARTICLE{9826438,
  author={Song, Wen and Chen, Xinyang and Li, Qiqiang and Cao, Zhiguang},
  journal={IEEE Transactions on Industrial Informatics},
  title={Flexible Job Shop Scheduling via Graph Neural Network and Deep Reinforcement Learning},
  year={2023},
  volume={19},
  number={2},
  pages={1600-1610},
  doi={10.1109/TII.2022.3189725}
}
```
- python $\ge$ 3.6.13
- pytorch $\ge$ 1.8.1
- gym $\ge$ 0.18.0
- numpy $\ge$ 1.19.5
- pandas $\ge$ 1.1.5
- visdom $\ge$ 0.1.8.9
- matplotlib $\ge$ 3.5.3
Note that `pynvml` is used in `test.py` to avoid excessive GPU memory usage. The code has been modified so that it does not call the `pynvml`-related functions when running on CPU (see the sketch after these notes).

To allow our program to automatically start the `visdom` server and open the browser for you, remember to grant administrator or Internet access privileges to the Python interpreter you use. Alternatively, you can open another console and run `python -m visdom.server` in the virtual environment you created to start the server manually; the Python interpreter running the training script then only needs Internet access to connect to it.
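For intuition only, here is a minimal sketch (not the repository's actual `test.py` code) of how `pynvml` can be used to check free GPU memory and fall back to CPU; the memory threshold is an illustrative assumption:

```python
import torch
import pynvml


def pick_device(min_free_bytes=2 * 1024 ** 3):
    """Return a CUDA device only if enough GPU memory is free, otherwise fall back to CPU."""
    if not torch.cuda.is_available():
        return torch.device("cpu")
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    pynvml.nvmlShutdown()
    # Threshold is illustrative; use CPU when the GPU does not have enough free memory
    return torch.device("cuda:0") if mem.free >= min_free_bytes else torch.device("cpu")
```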
For compatibility, we locked the versions of the above packages (some are higher than the minimum requirements) in `requirements.yml` using `conda`. You can install them by running:

```
conda env create -f requirements.yml
```
This is especially useful because the `gym` package keeps updating and its latest version may not be compatible with our code. After installing the packages, you can activate the environment by running:

```
conda activate fjsp-drl
```
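With the environment activated and a `visdom` server running on the default port 8097, a hypothetical sketch of logging a curve to it looks like the following; the window title and values are illustrative and are not the ones produced by `train.py`:

```python
import numpy as np
from visdom import Visdom

# Assumes `python -m visdom.server` is already running on the default port
viz = Visdom(server="http://localhost", port=8097)
assert viz.check_connection(), "Start the visdom server first"

win = None
for epoch, value in enumerate(np.random.rand(10)):  # placeholder values
    if win is None:
        win = viz.line(X=np.array([epoch]), Y=np.array([value]),
                       opts=dict(title="validation metric (illustrative)"))
    else:
        viz.line(X=np.array([epoch]), Y=np.array([value]), win=win, update="append")
```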
- `data_dev` and `data_test` are the validation sets and test sets, respectively.
- `data` saves the instance files generated by `./utils/create_ins.py`.
- `env` contains the code for the DRL environment.
- `graph` contains the part of the code related to the graph neural network.
- `model` saves the models for testing.
- `results` saves the trained models.
- `save` is the folder where the experimental results are saved.
- `utils` contains some helper functions.
- `config.json` is the configuration file.
- `mlp.py` is the MLP code (referenced from L2D).
- `PPO_model.py` contains the implementation of the algorithms in this article, including the HGNN and the PPO algorithm.
- `test.py` is used for testing.
- `train.py` is used for training.
- `validate.py` is used for validation and does not need to be called manually.
This article includes various experiments that are difficult to cover in a single run, so please modify `config.json` before each run.
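Since the exact contents of `config.json` depend on the experiment, the snippet below only sketches how the file can be loaded and edited programmatically before a run; the key names shown (`env_paras`, `num_jobs`, `num_mas`) are assumptions and should be checked against the actual file:

```python
import json

with open("config.json", "r", encoding="utf-8") as f:
    config = json.load(f)

# Hypothetical keys -- check config.json in the repository for the real parameter names
config["env_paras"]["num_jobs"] = 20
config["env_paras"]["num_mas"] = 10

with open("config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=4)
```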
Note that disabling the `validate_gantt()` function in `schedule()`, which checks whether the obtained solution is feasible, can improve the efficiency of the program.
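For intuition, such a feasibility check essentially verifies that the operations of each job respect their precedence order and that no machine processes two operations at the same time. The following self-contained sketch illustrates the idea; it is not the repository's `validate_gantt()` implementation:

```python
from collections import defaultdict


def is_schedule_feasible(ops):
    """ops: list of (job, op_index, machine, start, end) tuples describing a schedule."""
    by_job, by_machine = defaultdict(list), defaultdict(list)
    for job, idx, machine, start, end in ops:
        by_job[job].append((idx, start, end))
        by_machine[machine].append((start, end))
    # Precedence: each operation of a job must start after its predecessor finishes
    for items in by_job.values():
        items.sort()
        for (_, _, prev_end), (_, cur_start, _) in zip(items, items[1:]):
            if cur_start < prev_end:
                return False
    # Capacity: a machine can process at most one operation at a time
    for intervals in by_machine.values():
        intervals.sort()
        for (_, prev_end), (cur_start, _) in zip(intervals, intervals[1:]):
            if cur_start < prev_end:
                return False
    return True
```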
```
python train.py
```
Note that there should be a validation set of the corresponding size in `./data_dev`.
```
python test.py
```
Note that there should be model files (`*.pt`) in `./model`.