DRLforMKP
A Deep Reinforcement Learning-Based Scheme for Solving Multiple Knapsack Problems
This repository contains the official code used in "A Deep Reinforcement Learning-Based Scheme for Solving Multiple Knapsack Problems":
Appl. Sci. 2022, 12(6), 3068; https://doi.org/10.3390/app12063068
I ran the code in the Spyder IDE; the scripts are invoked as follows.
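If you are not using Spyder, each `runfile(...)` call below can be reproduced from a plain command line: Spyder's `runfile(path, wdir=..., args='a b c')` is roughly `python path a b c` run from `wdir`, with the `args` string split on whitespace into `sys.argv[1:]`. A minimal self-contained sketch of this equivalence (the `demo.py` script here is a hypothetical stand-in, not part of the repository):

```python
import os
import subprocess
import sys
import tempfile

# Spyder's runfile(path, wdir=..., args='a b c') is roughly equivalent to
# running `python path a b c` from wdir: the args string is split on
# whitespace and exposed to the script as sys.argv[1:].

# Hypothetical stand-in script, used only to demonstrate argument passing.
with tempfile.TemporaryDirectory() as wdir:
    script = os.path.join(wdir, "demo.py")
    with open(script, "w") as f:
        f.write("import sys; print(sys.argv[1:])\n")

    # Equivalent of: runfile(script, wdir=wdir, args='1000 50 3 10 80')
    result = subprocess.run(
        [sys.executable, script, "1000", "50", "3", "10", "80"],
        cwd=wdir, capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # ['1000', '50', '3', '10', '80']
```

The same translation applies to every `runfile` command in this README: replace `C:/yourdirectory` with the repository path and pass the `args` string as space-separated command-line arguments.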
CREATING ITEM AND KNAPSACK INSTANCES
- runfile('C:/yourdirectory/RI.py', wdir='C:/yourdirectory',args='1000 50 3 10 80')
- runfile('C:/yourdirectory/LI.py', wdir='C:/yourdirectory',args='1000 50 3 10 10')
- runfile('C:/yourdirectory/QI.py', wdir='C:/yourdirectory',args='1000 50 1 10 20')
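Outside Spyder, the three generator calls above reduce to plain command lines. A minimal sketch that builds them (the five positional arguments are copied through unchanged, since their individual meanings are not documented in this README):

```python
import sys

# Argument strings copied verbatim from the runfile() calls above;
# their individual meanings are not documented in this README.
runs = {
    "RI.py": "1000 50 3 10 80",
    "LI.py": "1000 50 3 10 10",
    "QI.py": "1000 50 1 10 20",
}

# Build the equivalent plain command lines, e.g. for subprocess.run().
commands = [[sys.executable, script, *args.split()]
            for script, args in runs.items()]
for cmd in commands:
    print(cmd[1], cmd[2:])
```

Running each entry of `commands` with `subprocess.run(cmd, cwd=repo_dir)` from the repository directory should match the Spyder invocations.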
TRAINING AND TESTING (HERE, THE TRAIN FILE MUST BE HARD-CODED TO A3C MODE)
- runfile('C:/yourdirectory/train.py', wdir='C:/yourdirectory',args='1000 0.0001 50 1000 5 0.9999999 6 4 0')
- runfile('C:/yourdirectory/test.py', wdir='C:/yourdirectory',args='1000 0.0001 50 1 5 0.9999999 6 4')
COMPARISON ALGORITHMS (TO RUN GUROBI, YOU NEED A LICENSE)
- runfile('C:/yourdirectory/random_sol_knap.py', wdir='C:/yourdirectory',args='1000 0.001 50 1 1 0.99')
- runcell(0, 'C:/yourdirectory/gurobi_op_mul.py')
- runfile('C:/yourdirectory/ffh_mul.py', wdir='C:/yourdirectory',args='1000 0.001 50 1 1 0.99')
I will delete the redundant parts ASAP, but the code works as-is.