Modelling resource constraints: Compute and Memory Units
fruffy opened this issue · 3 comments
Hello,
I have a question about the general model and intention of the resource constraints in Sonata.
Assuming I have an 8-core machine with 32 GB of RAM, I would like to model 8 data centres using the Sonata emulator. We define network functions that consume 512 MB of memory and 1/8 of a core, and each data centre has 1 core and 3584 MB of memory available.
This means that each data centre will be able to host a maximum of seven network functions.
The flavor of a network function is 1 compute unit and 512 MB of memory; a data centre has 8 compute units to model a single core.
As of this moment, a basic model of our code is composed as follows:
# Imports assumed from the son-emu and Mininet packages used in this snippet
from mininet.node import RemoteController
from emuvim.dcemulator.net import DCNetwork
from emuvim.dcemulator.resourcemodel.upb.simple import UpbSimpleCloudDcRM

MAX_CU = 8          # max compute units per datacenter (server), equivalent to one core
MAX_MU = 3584       # max memory (MB) per datacenter, can host 7 VNFs
MAX_CU_NET = 64     # max compute units available on the host machine
MAX_MU_NET = 28672  # max memory (MB) to be made available on the host machine

net = DCNetwork(controller=RemoteController, monitor=True,
                dc_emulation_max_cpu=MAX_CU_NET, dc_emulation_max_mem=MAX_MU_NET,
                enable_learning=True)
# Add 8 "datacentres" (or servers) to the network
chain_server1 = net.addDatacenter('chain-server1')
chain_server2 = net.addDatacenter('chain-server2')
chain_server3 = net.addDatacenter('chain-server3')
chain_server4 = net.addDatacenter('chain-server4')
chain_server5 = net.addDatacenter('chain-server5')
chain_server6 = net.addDatacenter('chain-server6')
chain_server7 = net.addDatacenter('chain-server7')
chain_server8 = net.addDatacenter('chain-server8')
# Create one resource model per "dc"(or server)
rm1 = UpbSimpleCloudDcRM(MAX_CU, MAX_MU)
rm2 = UpbSimpleCloudDcRM(MAX_CU, MAX_MU)
rm3 = UpbSimpleCloudDcRM(MAX_CU, MAX_MU)
rm4 = UpbSimpleCloudDcRM(MAX_CU, MAX_MU)
rm5 = UpbSimpleCloudDcRM(MAX_CU, MAX_MU)
rm6 = UpbSimpleCloudDcRM(MAX_CU, MAX_MU)
rm7 = UpbSimpleCloudDcRM(MAX_CU, MAX_MU)
rm8 = UpbSimpleCloudDcRM(MAX_CU, MAX_MU)
# Assign each resource model to the corresponding server
chain_server1.assignResourceModel(rm1)
chain_server2.assignResourceModel(rm2)
chain_server3.assignResourceModel(rm3)
chain_server4.assignResourceModel(rm4)
chain_server5.assignResourceModel(rm5)
chain_server6.assignResourceModel(rm6)
chain_server7.assignResourceModel(rm7)
chain_server8.assignResourceModel(rm8)
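For completeness, the VNFs would then be started against these datacenters roughly as sketched below. The image and the flavor name are only placeholders, since we are not sure which of the predefined flavors of UpbSimpleCloudDcRM corresponds to exactly 1 compute unit and 512 MB:

net.start()

# Try to start seven 1-CU / 512-MB VNFs on the first datacenter; the resource
# model should reject an eighth request once the 8 CU / 3584 MB budget is used up.
for i in range(7):
    chain_server1.startCompute("vnf%d" % i, image="ubuntu:trusty",
                               flavor_name="small")

net.stop()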
It seems to work fine and does not fail. However, I am concerned about the correctness of this approach. Is this how the resource model of Sonata is meant to be used, or are we potentially misinterpreting some functionalities (e.g., modelling the network maximum as the total amount of resources across all dcs)?
Thank you,
fruffy
Good morning @fruffy,
maybe I can clarify some things.
The first thing you need to understand is that the resource models only modify the memory limits and the CPU shares (available CPU time fraction) of a container. NOT the number of cores used per container or datacenter.
For the CPU model this means you first need to set the overall CPU time fraction that should be dedicated to the emulated VNFs (the containers), which your code does with the parameter MAX_CU_NET (passed as dc_emulation_max_cpu). Important here: this value describes a fraction of CPU time, so it should be between 0.0 and 1.0 (e.g. 0.7 to use at most 70% of the host machine's CPU cycles for all containers in your emulation). I assume this needs to be changed in your code, since you use 64 (maybe 0.64 is a good option for you). Using 1.0 normally does not make sense, since you need to leave some CPU resources to your host OS etc. Memory is different: here the absolute value in MByte has to be given (this looks good in your code).
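Just as a small sketch of what that change could look like (0.64 is only the example value from above, and the variable names are of course up to you):

MAX_CPU_FRACTION = 0.64   # at most 64% of the host's CPU time for all containers
MAX_MEM_MB = 28672        # absolute memory limit in MB for all containers

net = DCNetwork(controller=RemoteController, monitor=True,
                dc_emulation_max_cpu=MAX_CPU_FRACTION,
                dc_emulation_max_mem=MAX_MEM_MB,
                enable_learning=True)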
Then you can specify the abstract compute units per datacenter, as you did with MAX_CU. Here you can really use any positive natural number, since this is only an abstract unit. The selected resource model will then compute the CPU time fraction of a single container based on the CU values assigned to it (e.g. MAX_CU_NET / sum(MAX_CU of each dc) = the fraction of CPU time that corresponds to one CU requested at container instantiation). The model will reject the instantiation of a container when the CU of already running containers + the new request > MAX_CU of a datacenter (no oversubscription).
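To make this concrete with your numbers (using 0.64 as the overall CPU time fraction, as suggested above), here is the back-of-the-envelope calculation:

max_cpu_fraction = 0.64   # dc_emulation_max_cpu
total_cu = 8 * 8          # sum of MAX_CU over your 8 datacenters

cpu_time_per_cu = max_cpu_fraction / total_cu   # = 0.01
# => a VNF requesting 1 CU gets ~1% of the host's total CPU time,
#    and one fully loaded datacenter (8 CU) gets ~8% of it.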
You see, the CPU model does not have any influence on the number of CPU cores used, only the CPU time! This means all available CPU cores are always used in this model.
However, you can dedicate cores to containers by setting the cpu_set parameter when they are instantiated. This means you could also modify your code to allocate all containers of one emulated DC to a single CPU core. But this requires some (smaller) modifications of the code (maybe subclassing the resource model is a good option to start here).
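Only as an illustration of that idea (the keyword name cpuset_cpus is taken from Containernet/docker-py, and whether startCompute forwards it unchanged to the underlying Docker container is an assumption you would need to verify in the son-emu code):

datacenters = [chain_server1, chain_server2, chain_server3, chain_server4,
               chain_server5, chain_server6, chain_server7, chain_server8]

for core_id, dc in enumerate(datacenters):
    # pin the containers of datacenter i to host core i (illustrative only)
    dc.startCompute("vnf-dc%d" % core_id, image="ubuntu:trusty",
                    cpuset_cpus=str(core_id))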
Did you check this paper:
M. Peuster, H. Karl and S. van Rossem, "MeDICINE: Rapid prototyping of production-ready network services in multi-PoP environments," 2016 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, 2016, pp. 148-153. doi: 10.1109/NFV-SDN.2016.7919490
Equation (1) in section III.G explains the resource model you are using.
Does this help? Let me know if you have more questions.
Best,
Manuel
Hi Manuel,
thanks a lot, that was very helpful. I have read the paper (that is how we found Sonata!), but I was not quite sure how the units relate to actual computing power, especially the maximum compute of a data centre. Our intention was to "emulate" our cores in terms of computing power; we were not specifically concerned with the exact number of cores. So eight compute units basically expressed the power of one core in a data centre host.
We are going to remodel our code based on your suggestions!
Great. Let me know if you have any trouble.
I'll close this issue for now. If you have further questions feel free to re-open it or create a new one.