Code for an upcoming research publication.
In order to run this code, you must have a working ns3-gym environment.
Clone the repo so that the linear-mesh directory lands directly in ns3's scratch directory.
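For example, one possible way to achieve this layout (the repository URL, clone location, and ns-3 path below are placeholders, not values taken from this repo):

```bash
# Placeholders: substitute your own repository URL and ns-3 location.
git clone <repo-url> ~/linear-mesh-repo
ln -s ~/linear-mesh-repo/linear-mesh <ns3-root>/scratch/linear-mesh
# (copying the directory with `cp -r` instead of symlinking works as well)
```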
All basic configuration can be done within the files linear-mesh/agent_training.py (DDPG) and linear-mesh/tf_agent_training.py (DQN).
After configuring the scenario, execute the Python script corresponding to the agent you want to train.
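For instance (the working directory below is an assumption; adjust the paths to wherever linear-mesh ended up in your ns-3 tree):

```bash
# Assumes linear-mesh sits in ns-3's scratch directory as described above.
cd <ns3-root>/scratch
python linear-mesh/agent_training.py      # train the DDPG agent
# or
python linear-mesh/tf_agent_training.py   # train the DQN agent
```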
Currently, results can only be saved in a CometML workspace.
Example results for an experiment:
ToDo: add easy CometML token config
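Until that is in place, one way to point the training scripts at your CometML workspace is through comet_ml's standard environment variables. This is only a sketch under the assumption that the scripts do not hard-code an API key; if they do, you will need to edit the scripts instead.

```bash
# comet_ml reads these variables when no API key is passed explicitly.
export COMET_API_KEY="<your-comet-api-key>"
export COMET_WORKSPACE="<your-workspace>"      # optional
export COMET_PROJECT_NAME="<your-project>"     # optional
```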