FaaSProfiler is a tool for testing and profiling FaaS platforms. We built FaaSProfiler based on the real needs and limitations we faced early on in our serverless research:
- Arbitrary mix of functions and invocation patterns. FaaSProfiler enables the description of various invocation patterns, function mixes, and activity windows in a clean, user-friendly format.
- FaaS testing is not plug-and-play. Each function should be invoked independently at the right time. Precisely invoking hundreds or thousands of functions per second requires a reliable, automated tool. We achieve this with FaaSProfiler.
- Large amounts of performance and profiling data. FaaSProfiler enables fast analysis of performance profiling data (e.g., latency, execution time, wait time) together with resource profiling data (e.g., L1-D MPKI, LLC misses, block I/O). The user can specify which parameters to profile and make use of the rich feature sets of open-source data analysis libraries like Python pandas.
We have used FaaSProfiler for our research and will continue to use and improve it. We hope it accelerates testing early-stage research ideas for others by enabling quick and precise profiling of FaaS platforms on real servers. Enjoy this tool, and please don't forget to cite our research paper when you use it:
Mohammad Shahrad, Jonathan Balkind, and David Wentzlaff. "Architectural Implications of Function-as-a-Service Computing." 2019 52nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 52), October 2019.
[You don't need this if you intend to use the generic endpoint mode to test remote functions.]
FaaSProfiler has been fully tested on OpenWhisk. Please make sure to set up and install OpenWhisk before using FaaSProfiler.
Important Note: Some of the default OpenWhisk configuration limits might be too restrictive for your setup. Do not forget to configure those parameters (particularly `invocationsPerMinute`, `concurrentInvocations`, `firesPerMinute`, and `sequenceMaxLength`).
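In OpenWhisk's ansible-based deployment, these limits typically live under a `limits` key in the environment's group_vars (the exact file layout depends on your OpenWhisk version). A sketch of what raising them might look like — the values below are illustrative placeholders, not recommendations:

```yaml
limits:
  invocationsPerMinute: 60000    # per-namespace invocation rate cap
  concurrentInvocations: 10000   # max in-flight activations
  firesPerMinute: 60000          # trigger fire rate cap
  sequenceMaxLength: 1000        # max actions in a sequence
```

Check your OpenWhisk deployment's documentation for where these are set in your version before editing.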
After cloning this repo, run the one-time configuration script:

```bash
bash configure.sh
```

Activate the virtual environment:

```bash
source venv_faasprofiler/bin/activate
```

You can deactivate the virtual environment at the end by running:

```bash
deactivate
```
The first step is to prepare a workload configuration file that tells FaaSProfiler about your scenario. A sample workload configuration file, called workload_configs_local_openwhisk.json, has been provided. You can base your own on this JSON file and configure it. Here are some details:
- Primary fields:
  - `test_duration_in_seconds`: Determines the length of the test in seconds.
  - `random_seed`: If set to `null`, the randomization seed varies with time. For deterministic invocations, set this variable to a 32-bit unsigned integer.
  - `blocking_cli`: This true/false option determines whether consecutive invocations use blocking CLI calls.
  - `endpoint`: Specifies the endpoint type for functions to be invoked. By default, the value is set to `"local_openwhisk"`, which denotes that functions are deployed on an OpenWhisk instance running locally (on the same machine that hosts FaaSProfiler). Changing the value to `"generic"` allows invoking remote functions (e.g., on AWS Lambda, Google Cloud Functions, etc.). Skimming through workload_configs_generic_endpoint.json can guide you on how to use this option. Note that when invoking remote functions with the generic endpoint, you won't be able to do performance profiling; effectively, FaaSProfiler becomes a tool that assists you in creating precise and reproducible invocations.
  - `instances`: A collection of invocation instances. Each instance describes the invocation behavior for an application (OpenWhisk action). Multiple instances of the same application can also be deployed with different distributions, input parameters, or activity windows to create more complicated patterns.
- Each invocation instance:
  - `application`: This should be the same as the OpenWhisk action name. (You can see the list of successfully built OpenWhisk actions using `wsk action list -i`.)
  - FaaSProfiler supports two invocation types. You need to select one for each instance and configure it accordingly:
    i. Synthetic traffic:
       1. `distribution`: SWI currently supports Uniform and Poisson distributions.
       2. `rate`: Function invocations per second. For a Poisson distribution, this is lambda. A rate of zero means no invocations.
    ii. Trace-based traffic:
       1. `interarrivals_list`: The list of interarrival times. This mode allows replaying real traces using FaaSProfiler.
  - `activity_window`: If set to `null`, the application is invoked during the entire test. By setting a time window, one can limit the activity of the application to a sub-interval of the test. There is no need to provide this parameter when using trace-based traffic.
  - `param_file`: This optional entry allows specifying an input parameter JSON file, similar to the `-P` option in the wsk CLI.
  - `data_file`: This optional entry allows specifying binary input files, such as images, for the function.
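Trace-based traffic can also stand in for a synthetic process. The sketch below (an illustration, not part of FaaSProfiler) generates an `interarrivals_list` for a Poisson process using only the standard library; the resulting list could be pasted into a workload config's trace-based instance:

```python
import random

def poisson_interarrivals(rate_per_sec, duration_sec, seed=42):
    """Draw exponential interarrival times until the trace spans duration_sec.

    For a Poisson process with rate `rate_per_sec`, interarrival times are
    exponentially distributed with mean 1/rate_per_sec.
    """
    rng = random.Random(seed)  # fixed seed => reproducible trace
    interarrivals, elapsed = [], 0.0
    while elapsed < duration_sec:
        gap = rng.expovariate(rate_per_sec)
        interarrivals.append(round(gap, 6))
        elapsed += gap
    return interarrivals

# Example: roughly 10 invocations/second over a 5-second window.
trace = poisson_interarrivals(rate_per_sec=10, duration_sec=5)
print(len(trace), sum(trace))
```

Because the seed is fixed, the same trace can be replayed across tests, which is the main appeal of trace-based traffic over the synthetic Poisson mode.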
- Performance monitoring (`perf_monitoring`) field, where you can specify:
  - `runtime_script`: A script that is run at the beginning of the test. It allows specifying performance monitoring tools such as perf, pqos, or blktrace to run at the same time as the test. An example of this script is provided at monitoring/RuntimeMonitoring.sh. This field can be ignored by setting it to `null`.
  - `post_script`: A script that runs after the test ends, intended to allow automating post-analysis. This field can be ignored by setting it to `null`.
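Pulling the fields above together, a minimal configuration might look like the following sketch. The action names, rates, and windows here are made up for illustration; see the shipped workload_configs_local_openwhisk.json for the authoritative template:

```json
{
  "test_duration_in_seconds": 60,
  "random_seed": 100,
  "blocking_cli": false,
  "endpoint": "local_openwhisk",
  "instances": {
    "instance1": {
      "application": "json_dumps",
      "distribution": "Poisson",
      "rate": 10,
      "activity_window": [10, 50]
    },
    "instance2": {
      "application": "primes",
      "interarrivals_list": [0.5, 0.3, 0.7, 0.5]
    }
  },
  "perf_monitoring": {
    "runtime_script": "monitoring/RuntimeMonitoring.sh",
    "post_script": null
  }
}
```

Here `instance1` uses synthetic Poisson traffic restricted to a 40-second window, while `instance2` replays a short hand-written trace of interarrival times.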
We advise the user to go over the sample workload configuration files to familiarize themselves with these fields.
Simply run the following script (replace `CONFIG_FILE` with the name of your workload config file):

```bash
./WorkloadInvoker -c CONFIG_FILE
```

Test logs can be found in `logs/SWI.log`.
The Workload Analyzer module analyzes a workload after it is run. Here is how to use it:
- Run the Workload Analyzer:

  ```bash
  ./WorkloadAnalyzer -r -p
  ```
- Certain features can be controlled by input arguments:
  - `-v` or `--verbose`: prints the detailed test data
  - `-p` or `--plot`: plots the test results
  - `-s` or `--save_plot`: saves test result plots (this option should be used with `-p`)
  - `-a` or `--archive`: archives the test results in a pickle file (in the `data_archive` directory)
  - `-r` or `--read_results`: also gathers the results of function invocations
- Analysis logs can be found in `logs/WA.log`.
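As noted earlier, archived results can be explored with pandas. The snippet below is a generic sketch: the column names are invented for illustration, and the toy DataFrame stands in for whatever schema the Workload Analyzer actually pickles into `data_archive`:

```python
import pandas as pd

# Toy results table standing in for an archived test; in practice you would
# load one with pd.read_pickle("data_archive/<your_test>.pkl").
records = pd.DataFrame({
    "function":   ["json_dumps", "json_dumps", "primes"],
    "latency_ms": [12.4, 15.1, 98.0],
})

# Per-function latency statistics via a groupby aggregation.
stats = records.groupby("function")["latency_ms"].agg(["count", "mean", "max"])
print(stats)
```

From here, the usual pandas toolbox (filtering, joins with resource-profiling data, percentile computations) applies directly.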
The Comparative Analyzer module compares the results of tests archived in the `data_archive` directory.
| Environment/Tool | Tested Version(s) |
|---|---|
| Python | 3.8 (fully tested), 3.10 (partially tested), 3.12 (partially tested) |
| OS | Ubuntu 16.04.4 LTS, Ubuntu 20.04.1 LTS, Ubuntu 24.04 LTS |

| Python Library | Latest Tested Version |
|---|---|
| requests-futures | 1.0.1 |
| matplotlib | 3.3.3 |
| numpy | 1.19.5 |
| pandas | 1.2.0 |
| seaborn | 0.11.1 |