*Note: This project has been reimplemented using Ansible; please check BenchFaster.*
This repository contains the scripts for the automated deployment of BenchFaaS.
- F. Carpio, M. Michalke and A. Jukan, "BenchFaaS: Benchmarking Serverless Functions in an Edge Computing Network Testbed," in IEEE Network, DOI: 10.1109/MNET.125.2200294
Five performance tests are defined using JMeter in the `test_scheduler/performance_tests` directory:
- Overhead: `hello-world` function
- Payload size: `payload-echo` function
- Intensive: `img-classifier-hub` function
- Scalability: `fib-go` function
- Chains: `payload-echo-workflow` function
Serverless functions used for the benchmarks can be found here.
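For reference, an individual test plan can also be launched by hand with JMeter in non-GUI mode. The `.jmx` file name below is only an illustrative assumption; use the actual plan files found in `test_scheduler/performance_tests`:

```bash
# Run a single JMeter test plan in non-GUI mode (file name is illustrative,
# substitute the actual .jmx from test_scheduler/performance_tests)
jmeter -n -t test_scheduler/performance_tests/hello-world.jmx -l hello-world-results.jtl
```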
- Tester machine:
  - Linux based OS
  - Apache JMeter v5.4.3 (download)
  - (Optional) Local container registry (instructions); see the example commands after this requirements list.
- VMs (Hypervisor):
  - Linux based OS
  - libvirt/KVM (Ubuntu | Arch Linux)
  - Vagrant (installation)
    - Configure password-less sudo for NFS as explained here.
  - Vagrant plugins (installation): `vagrant-libvirt`
  - netem (`tc`): already included in most Linux distros.
- PMs:
  - x86_64/ARM64 based devices
  - Linux based OS
  - GbE Switch/Router
  - Nebula (installation)
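As a quick sanity check of the optional local registry and of the hypervisor tooling, the commands below are a minimal sketch; they assume Docker is available on the machine hosting the registry and that Vagrant and libvirt are already installed on the hypervisor:

```bash
# (Optional) start a local container registry on port 5000; Docker is assumed here
docker run -d --restart=always --name local-registry -p 5000:5000 registry:2

# On the hypervisor: install the vagrant-libvirt plugin and verify libvirt/KVM is reachable
vagrant plugin install vagrant-libvirt
virsh --connect qemu:///system list --all

# netem ships with iproute2; this only verifies that the tc tooling is present
tc qdisc show
```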
For these instructions, it is assumed that all VMs/PMs are on the same LAN with static IP addresses. However, by using Nebula the testbed also works with devices located in different networks, even behind NATs or firewalls.
All VMs/PMs additionally need an internet connection for the deployment of the required tools, but not for the execution of the benchmarks.
The public SSH key from the tester machine needs to be added to the Hypervisor and to all PMs.
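A typical way to distribute the key is `ssh-copy-id`; the user names and addresses below are placeholders:

```bash
# Copy the tester machine's public SSH key to the hypervisor and to each PM
ssh-copy-id <user>@<hypervisor-ip>
ssh-copy-id <user>@<pm-ip>
```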
- Clone this repository to the tester machine.
- Modify `config.yml` and `testbed_controller.sh` according to your use case (an illustrative excerpt of `config.yml` is sketched after these steps).
  - When using VMs, adjust:
    - `devices.hypervisor.address`: Hypervisor's IP.
    - `devices.hypervisor.login`: Username of the hypervisor.
    - `devices.testmachine.vm_interface`: Tester machine interface connecting to the hypervisor.
    - `devices.vm.benchmark_bridge`: Hypervisor's interface to bridge the VMs, reachable from the tester machine.
    - `devices.vm.benchmark_ip`: Headnode's IP reachable from the tester machine.
  - When using PMs, adjust:
    - `devices.testmachine.pm_interface`: Name of the interface connected to the PMs.
    - `devices.pm.lighthouse.address` and `devices.pm.lighthouse.port`: Nebula's address and port.
    - `devices.pm.devices`: Set of PMs for the testbed.
      - `ssh_address`: Specific IP address of the PM.
      - `login`: Username for SSH.
      - `qos_interface`: Network interface of the PM to apply WAN emulation.
      - `lighthouse`: True if the machine is a lighthouse.
  - (Optional) In both cases, for a local container registry; leave these fields blank to use the default public registry specified in the `yaml` files:
    - `*.repoip`: IP of the machine with the local container registry.
    - `*.repoport`: Port of the machine with the local container registry.
    - `*.privaterepo`: Name of the local container registry.
- Run `./testbed_controller.sh` from the tester machine.
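The excerpt below is a hedged sketch of how the keys described above might be laid out in `config.yml`. All addresses, interface names, ports, and user names are placeholders, and the exact structure should be taken from the file shipped with the repository:

```yaml
# Illustrative excerpt only: key paths follow the descriptions above, values are placeholders.
devices:
  hypervisor:
    address: 192.168.1.10        # Hypervisor's IP
    login: ubuntu                # Username on the hypervisor
  testmachine:
    vm_interface: enp3s0         # Tester machine interface towards the hypervisor
    pm_interface: enp4s0         # Tester machine interface towards the PMs
  vm:
    benchmark_bridge: br0        # Hypervisor interface bridging the VMs
    benchmark_ip: 192.168.1.100  # Headnode IP reachable from the tester machine
  pm:
    lighthouse:
      address: 192.168.100.1     # Nebula lighthouse address
      port: 4242                 # Nebula lighthouse port
    devices:
      - ssh_address: 192.168.1.21
        login: pi
        qos_interface: eth0      # PM interface used for WAN emulation
        lighthouse: true
```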
If you get the following error when deploying VMs using `qemu-kvm`:
Error while creating domain: Error saving the server: Call to virDomainDefineXML failed: invalid argument: could not get preferred machine for /usr/bin/qemu-system-x86_64 type=kvm
Check the solution from here.
If you get the following error when deploying VMs using `qemu-kvm` on Ubuntu:
Error while creating domain: Error saving the server: Call to virDomainDefineXML failed: Cannot check QEMU binary /usr/libexec/qemu-kvm: No such file or directory
Check the solution from here.
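One commonly suggested workaround for this Ubuntu error, stated here as an assumption to be checked against the linked solution, is to make the expected binary path point at the distribution's QEMU binary:

```bash
# Assumed workaround: Ubuntu installs qemu-system-x86_64 under /usr/bin,
# while libvirt/vagrant-libvirt may look for /usr/libexec/qemu-kvm.
sudo mkdir -p /usr/libexec
sudo ln -s /usr/bin/qemu-system-x86_64 /usr/libexec/qemu-kvm
```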