FC2Q: Exploiting Fuzzy Control in Server Consolidation for Cloud Applications with SLA Constraints.
This is a meta-project for the code used in the experimental evaluation presented in the article:
Cosimo Anglano and Massimo Canonico and Marco Guazzone
FC2Q: Exploiting Fuzzy Control in Server Consolidation for Cloud Applications with SLA Constraints
Concurrency Computat.: Pract. Exper., 27(17):4491-4514, 2015.
doi: 10.1002/cpe.3410
Please cite this project as follows (BibTeX format):
```
@ARTICLE{CPE:CPE3410,
  author = {Cosimo Anglano and Massimo Canonico and Marco Guazzone},
  title = {{FC2Q}: Exploiting Fuzzy Control in Server Consolidation for Cloud Applications with {SLA} Constraints},
  journal = {Concurrency and Computation: Practice and Experience},
  volume = {27},
  number = {17},
  issn = {1532-0634},
  url = {http://dx.doi.org/10.1002/cpe.3410},
  doi = {10.1002/cpe.3410},
  pages = {4491--4514},
  keywords = {cloud computing, resource management, feedback control, fuzzy control, server consolidation, virtualized cloud applications},
  year = {2015},
}
```
Modern cloud data centers rely on server consolidation (the allocation of several virtual machines on the same physical host) to minimize their costs.
Choosing the right consolidation level (how many and which virtual machines are assigned to a physical server) is a challenging problem, because contemporary multitier cloud applications must meet service level agreements in face of highly dynamic, nonstationary, and bursty workloads.
In (Anglano,2015), we deal with the problem of achieving the best consolidation level that can be attained without violating application service level agreements.
We tackle this problem by devising fuzzy controller for consolidation and QoS (FC2Q), a resource management framework exploiting feedback fuzzy logic control, that is able to dynamically adapt the physical CPU capacity allocated to the tiers of an application in order to precisely match the needs induced by the intensity of its current workload.
We implement FC2Q on a real testbed and use this implementation to demonstrate its ability to meet the aforementioned goals, by means of a thorough experimental evaluation carried out with real-world cloud applications and workloads.
Furthermore, we compare the performance achieved by FC2Q against that attained by existing state-of-the-art solutions, and we show that FC2Q works better than them in all the considered experimental scenarios.
The project is composed of two macro-modules:
- server-side module: contains code to be run on the server side (e.g., inside a virtual machine (VM)). The module is composed of:
  - `server/olio-dev.zip`: the archive containing the patched sources of the Apache Olio application (v0.2)
  - `server/RUBiS-dev.zip`: the archive containing the patched sources of the OW2 RUBiS application (v1.4.3)
  - `server/apache-cassandra-0.7.9-bin.tar.gz`: the archive containing the binaries of Apache Cassandra (v0.7.9)
- client-side module: contains code to be run on the client side. The module is composed of:
  - `client/boost-ublasx-v1.zip`: the archive containing the sources of Boost.uBLASx (v1)
  - `client/dcsxx-commons-v2.zip`: the archive containing the sources of dcsxx-commons (v2)
  - `client/dcsxx-control-v2.zip`: the archive containing the sources of dcsxx-control (v2)
  - `client/dcsxx-sysid-v2.zip`: the archive containing the sources of dcsxx-sysid (v2)
  - `client/dcsxx-testbed-v2.zip`: the archive containing the sources of dcsxx-testbed (v2)
  - `client/rain-workload-toolkit-dev.zip`: the archive containing the sources of the RAIN workload toolkit (development version)
  - `client/YCSB-dev.zip`: the archive containing the sources of the YCSB benchmark (development version)
For more up-to-date versions of the above components, see the related project web sites.
On your host operating system, you must install the Xen hypervisor.
Furthermore, each tier of the applications you plan to run should be installed in a separate VM.
Finally, to enable remote communications between the Xen hypervisor and the client-side components, you need to install the libvirt virtualization API and run the `libvirtd` daemon.
In our experiments, we used the Fedora 18 Linux distribution as the host operating system. For a guide on how to install and set up Xen on Fedora, you can refer, for instance, to the Fedora Host Installation page of the Xen wiki.
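The host-side prerequisites above can be sketched as follows. This is only an illustration: the package names, the `systemctl` commands, and the `xen:///` connection URI are assumptions based on a typical Fedora/Xen setup and may differ on your distribution.

```shell
# Install the Xen hypervisor and the libvirt API (Fedora package names;
# adjust for your distribution).
sudo yum install xen libvirt libvirt-daemon-xen

# Start the libvirtd daemon and make it start at boot.
sudo systemctl enable libvirtd
sudo systemctl start libvirtd

# After rebooting into the Xen kernel, verify that libvirt can talk to Xen.
sudo virsh -c xen:/// list --all
```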
For each application tier, we set up a dedicated VM, that is:
- For Olio, we created two VMs, one for the Web tier and another one for the DB tier.
- For RUBiS, we created two VMs, one for the Web tier and another one for the DB tier.
- For Cassandra, we created only one VM.
For each of the above VMs, we used the CentOS 6.5 Linux distribution as the guest operating system.
The `libvirtd` daemon must be started on the host system in order to enable remote communications between the client-side components and the (server-side) hypervisor (Xen, in our case).
In the following, we set up libvirt to accept remote communications over secure TLS connections on the public TCP/IP port.
When we refer to the server machine, we mean the machine that accepts remote requests and where a `libvirtd` daemon instance is running (i.e., the host system).
Instead, when we refer to the client machine, we mean the machine from which you issue remote requests (i.e., the machine that runs the client-side components).
In the rest of this section, we assume a Red Hat-like Linux operating system.
- On the server machine, start the `libvirtd` daemon in listening mode, either by running it with the `--listen` option or by adding the line `LIBVIRTD_ARGS="--listen"` to the `/etc/sysconfig/libvirtd` file, so that `libvirtd` comes up in listening mode whenever it is started.
- Also, in order to accept remote connections, you need to open the `libvirtd` port in your firewall for the TCP protocol; this port is usually `16514` (check your `/etc/libvirt/libvirtd.conf` file).
- Set up a Certificate Authority (CA), for instance by using the `certtool` utility from the GnuTLS library. After this step, you have two files, say:
  - `cakey.pem`: your CA's (secret) private key
  - `cacert.pem`: your CA's (public) certificate
- Install the `cacert.pem` file on both the client and server machines to let them know that they can trust certificates issued by your CA. The usual installation directory for `cacert.pem` is `/etc/pki/CA`.
- Issue the server certificate. On the server machine, you need to issue a certificate with the X.509 CommonName (CN) field set to the host name of the server. The CN must match the host name that clients will use to connect to the server. After this step, you have two files, say:
  - `serverkey.pem`: the server's private key
  - `servercert.pem`: the server's certificate

  These files have to be installed on the server. Note that the `serverkey.pem` file must have its permissions set to `600`. The usual installation directory is `/etc/pki/libvirt/private` for `serverkey.pem` and `/etc/pki/libvirt` for `servercert.pem`.
- Issue the client certificate. On the client machine, you need to issue a certificate with the X.509 Distinguished Name (DN) set to a suitable name (e.g., the client name). Also, make sure the server host name is recognized by your client system (e.g., put it in the `/etc/hosts` file). After this step, you have two files, say:
  - `clientkey.pem`: the client's private key
  - `clientcert.pem`: the client's certificate

  These files have to be installed on the client. The usual installation directory is `/etc/pki/libvirt/private` for `clientkey.pem` and `/etc/pki/libvirt` for `clientcert.pem`.
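As an illustration, the CA and certificate-issuing steps above can be sketched with `certtool` as follows. The host name `server.example.com`, the organization fields, and the client name are placeholders to be adapted to your setup:

```shell
# --- Create the CA (on a trusted machine) ---
certtool --generate-privkey > cakey.pem
cat > ca.info <<EOF
cn = Example CA
ca
cert_signing_key
EOF
certtool --generate-self-signed --load-privkey cakey.pem \
         --template ca.info --outfile cacert.pem

# --- Server certificate (CN must match the server host name) ---
certtool --generate-privkey > serverkey.pem
cat > server.info <<EOF
organization = Example Org
cn = server.example.com
tls_www_server
encryption_key
signing_key
EOF
certtool --generate-certificate --load-privkey serverkey.pem \
         --load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
         --template server.info --outfile servercert.pem

# --- Client certificate (DN set to a suitable client name) ---
certtool --generate-privkey > clientkey.pem
cat > client.info <<EOF
organization = Example Org
cn = client1
tls_www_client
encryption_key
signing_key
EOF
certtool --generate-certificate --load-privkey clientkey.pem \
         --load-ca-certificate cacert.pem --load-ca-privkey cakey.pem \
         --template client.info --outfile clientcert.pem

# --- Install the files on the server, as described above ---
sudo install -m 644 cacert.pem /etc/pki/CA/cacert.pem
sudo install -m 644 servercert.pem /etc/pki/libvirt/servercert.pem
sudo install -m 600 serverkey.pem /etc/pki/libvirt/private/serverkey.pem
```

You can then verify the setup from the client machine with, for instance, `virsh -c xen://server.example.com/ list`, since remote libvirt URIs use TLS by default.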
To set up the Apache Olio application:

- Create two VMs, one for the Web tier and another one for the DB tier.
- Copy the file `server/olio-dev.zip` inside each VM.
- Unzip the file `olio-dev.zip`.
- Follow the building instructions (for the tier running inside the VM) at `olio/docs/php_setup.html`.
To set up the OW2 RUBiS application:

- Create two VMs, one for the Web tier and another one for the DB tier.
- Copy the file `server/RUBiS-dev.zip` inside each VM.
- Log into each VM.
- Unzip the file `RUBiS-dev.zip`.
- Follow the building instructions (for the tier running inside the VM) at `RUBiS/README.md`.
The Apache Cassandra version included in this distribution does not need to be compiled, since it already comes in binary form.
- Create one VM.
- Copy the file `server/apache-cassandra-0.7.9-bin.tar.gz` inside the VM.
- Log into the VM.
- Unpack the file `apache-cassandra-0.7.9-bin.tar.gz`.
- Follow the instructions at DataStax.
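Assuming the default archive layout, the unpack-and-run steps inside the VM look like this (the configuration values are setup-specific):

```shell
# Unpack the Cassandra binary distribution and enter its directory.
tar xzf apache-cassandra-0.7.9-bin.tar.gz
cd apache-cassandra-0.7.9

# Edit conf/cassandra.yaml (listen address, data directories) as needed,
# then start the server; -f keeps it attached to the console.
bin/cassandra -f
```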
To build the client-side components, you need:

- A modern C++98 compiler (e.g., GCC v4.8 or newer)
- The GNU make tool
- Apache Ant (v1.8 or newer)
- Apache Maven (v3 or newer)
- Boost C++ libraries (v1.54 or newer)
- Boost.Numeric Bindings library (v2 or newer)
- fuzzylite fuzzy logic control library (v4 or newer)
- LAPACK Linear Algebra PACKage (v3.5 or newer)
- libvirt virtualization API library (v1.1 or newer)
- Oracle Java SE SDK (v7 or newer)
Also, for a more detailed list of requirements, see the documentation of the various included sub-projects.
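On a Red Hat-like distribution, most of the packaged dependencies can be installed in one shot. The package names below are assumptions for a Fedora/CentOS-like system; fuzzylite and the Boost.Numeric Bindings library typically have to be built from their sources instead.

```shell
# Packaged build dependencies (names may differ across distributions).
sudo yum install gcc-c++ make ant maven boost-devel lapack-devel \
                 libvirt-devel java-1.7.0-openjdk-devel
```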
To build the RAIN workload toolkit:

- Unzip the file `files/rain-workload-toolkit-dev.zip`.
- Change the current directory to `rain-workload-toolkit`.
- Run `ant package`.
- For the Olio workload driver:
  - Edit `config/rain.config.olio.json` and `config/profiles.config.olio.json` (see `rain-workload-toolkit/src/radlab/rain/workload/olio/README.md` for more information).
  - Run `ant package-olio`.
- For the RUBiS workload driver:
  - Edit `config/rain.config.rubis.json` and `config/profiles.config.rubis.json` (see `rain-workload-toolkit/src/radlab/rain/workload/rubis/README.md` for more information).
  - Run `ant package-rubis`.
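Putting the RAIN build steps together, a typical session looks like this (run from the directory where the archives were unpacked):

```shell
# Build the core RAIN toolkit.
unzip files/rain-workload-toolkit-dev.zip
cd rain-workload-toolkit
ant package

# Edit the Olio/RUBiS JSON configuration files first (see the per-workload
# READMEs), then build the corresponding driver packages.
ant package-olio
ant package-rubis
```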
To build the YCSB benchmark:

- Unzip the file `files/YCSB-dev.zip`.
- Change the current directory to `YCSB`.
- Run `mvn clean package` (see `YCSB/BUILD` and `YCSB/README.md` for more information).
To set up the dcsxx-testbed framework:

- Unzip the file `files/dcsxx-testbed-v2.zip`.
- Follow the instructions in `dcsxx-testbed/README.md`.
Bug notifications and patches are always welcome, and so are other types of contributions (e.g., new features or improvements).
Please note that, since this is only a meta-project (i.e., a container for other sub-projects), feedback and contributions should be addressed to the specific sub-project.
- (Anglano,2015) C. Anglano, M. Canonico, and M. Guazzone. "FC2Q: Exploiting Fuzzy Control in Server Consolidation for Cloud Applications with SLA Constraints." Concurrency and Computation: Practice and Experience, 27(17):4491-4514, 2015. doi:10.1002/cpe.3410.