CN-UPB/NFVdeep

Question about the resulting placement

yingchaot opened this issue · 8 comments

Hello, thank you for providing this great reproduction of the paper's code! I would like to ask: after I run script.py, the generated placement.txt shows that the VNFs of the SFC are only ever placed on node 6. I don't know what the problem is. Could you help me understand it?

Hi, thanks for the kind words.

Without having looked at the setup or script yet: is the outcome surprising in terms of placement? Could it be that placing everything on node 6 is actually intended and not a problem?

@stwerner97 @NilsLuca Maybe you know more?
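
In the meantime, a quick way to check how the placements are distributed is to tally the node ids in placement.txt. This is only a rough sketch: it assumes a hypothetical format where each non-empty line ends with the node id of the placement, so the parsing may need adjusting to the actual file layout.

```python
# Quick tally of how often each node occurs in placement.txt.
# NOTE: purely a sketch -- assumes a hypothetical format where each
# non-empty line ends with the node id; adjust to the actual layout.
from collections import Counter

counts = Counter()
with open("placement.txt") as f:
    for line in f:
        fields = line.split()
        if fields:
            counts[fields[-1]] += 1

for node, num in counts.most_common():
    print(f"node {node}: {num} placements")
```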

Hi @yingchaot, sorry for the late reply!

What configuration did you use for the run? Did the agent's training reward converge or increase at all? As far as I recall, we were unable to reproduce the results published in the paper: although the agent improved over time, it could not match the results of some heuristics we tried. Still, I don't recall any issue where the agent used only a single location for the placement.

Unfortunately, I am quite busy at the moment, but I will try to look into the issue at the end of next week.

Hello @stefanbschneider @stwerner97! Thank you for your reply!
Sorry for not describing the problem accurately before. I used abilene.gpickle and requests.json from the data folder, with PPO as the agent. The following figures show my final training results:
[figures: PPO training results]
If possible, I would like to ask two more questions: 1) When I run script.py with a different agent, such as DQN, the request acceptance rate, average bandwidth consumption, and other metrics are always 0, and no training takes place. The training output is shown in the following figure:
[figure: DQN training output]
2) You set the bandwidth unit cost in the environment/network.py file to 0.006, while in the original paper it was 0.0006. Was this intentional?
Feel free to reply whenever it is convenient; thank you again!

Hi @yingchaot, thanks for the reply! Did you also take a look at the agent's episode returns? I think stable_baselines3 logs these as eval/mean_reward or similar. In response to your other questions:

  1. We mainly tested the implementation with PPO. That the script does not run with other agents is a bug that I'll try to fix.
  2. This is just an oversight; thanks for spotting the typo! If I recall correctly, we had some general issues reproducing their simulation results, since the paper does not state the parameter values for node capacities or request demands in detail.

I think it's best to treat our implementation as an attempt to reproduce their general approach to service coordination, not their simulation results.
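
Regarding the episode returns, here is a minimal sketch of how they can be logged with stable_baselines3's EvalCallback so that eval/mean_reward shows up in TensorBoard. CartPole-v1 is only a stand-in; the NFVdeep environment would go in its place.

```python
# Minimal sketch: logging episode returns with stable_baselines3's
# EvalCallback so that eval/mean_reward appears in TensorBoard.
# CartPole-v1 is only a stand-in; substitute the NFVdeep environment.
import gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback

train_env = gym.make("CartPole-v1")
eval_env = gym.make("CartPole-v1")

eval_callback = EvalCallback(
    eval_env,
    log_path="./logs/",   # also writes evaluations.npz with the returns
    eval_freq=1000,       # evaluate every 1000 environment steps
    n_eval_episodes=5,
)

model = PPO("MlpPolicy", train_env, tensorboard_log="./logs/")
model.learn(total_timesteps=50_000, callback=eval_callback)
```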

Hi @stwerner97, thank you for your patient answer! However, the log files do not show that parameter. I only found two related parameters, as shown in the following figure:
[figure: logged parameters]
Have a nice weekend!

Hi @yingchaot, the linked PR should fix the issues you mentioned regarding the evaluation. Moreover, it changes the default request scenario (i.e., it increases the load).
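
Independently of the PR, you can also read the episode returns directly from the Monitor logs. A minimal sketch, assuming the training script wraps the environment in stable_baselines3's Monitor, which writes a monitor.csv into the log directory (the path here is a placeholder):

```python
# Minimal sketch: reading episode returns from Monitor logs. Assumes the
# training script wraps the environment in stable_baselines3's Monitor,
# which writes monitor.csv into the log directory (placeholder path).
import matplotlib.pyplot as plt
from stable_baselines3.common.results_plotter import load_results, ts2xy

df = load_results("./logs/")  # DataFrame with r (return), l (length), t (time)
timesteps, returns = ts2xy(df, "timesteps")

plt.plot(timesteps, returns)
plt.xlabel("timesteps")
plt.ylabel("episode return")
plt.show()
```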

I'll close the issue for now. If you have any additional questions, feel free to reopen it!

Thanks for all the effort @stwerner97 !

w1w9 commented

Hello, I can follow the rough process, but I do not know how to work with the experiment results and use them to produce figures like those above. Could you give me some instructions on the experiment outputs (the logs and placements from script.py, and the ray results from tuning.py)? I am looking forward to your reply.
[figures: example result plots]
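
For reference, a minimal sketch of how the logged evaluation results could be turned into a learning-curve figure, assuming stable_baselines3's EvalCallback wrote an evaluations.npz into the log directory; the exact outputs of script.py and tuning.py may differ. Ray Tune trials additionally write a progress.csv per trial, which can be loaded with pandas.read_csv and plotted the same way.

```python
# Minimal sketch of turning logged results into a learning-curve figure.
# Assumes EvalCallback wrote evaluations.npz into ./logs/ (placeholder path).
import numpy as np
import matplotlib.pyplot as plt

data = np.load("./logs/evaluations.npz")
# "results" has shape (n_evaluations, n_eval_episodes); average per evaluation.
mean_returns = data["results"].mean(axis=1)

plt.plot(data["timesteps"], mean_returns)
plt.xlabel("timesteps")
plt.ylabel("mean evaluation return")
plt.show()
```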