Faasm is a high-performance stateful serverless runtime.
Faasm provides multi-tenant isolation, yet allows functions to share regions of memory. These shared memory regions give low-latency concurrent access to data, and are synchronised globally to support large-scale parallelism.
Faasm combines software fault isolation from WebAssembly with standard Linux tooling, to provide security and resource isolation at low cost. Faasm runs functions side-by-side as threads of a single runtime process, with low overheads and fast boot times.
Faasm is built on Faabric, which provides the distributed messaging and state layer.
The underlying WebAssembly execution and code generation is built using WAVM.
Faasm defines a custom host interface which extends WASI to include function inputs and outputs, chaining functions, managing state, accessing the distributed filesystem, dynamic linking, pthreads, OpenMP and MPI.
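As an illustration, a Faasm C++ function is an ordinary program that accesses the host interface through the faasm/cpp library. The sketch below reads the call's input and writes its output; the call names (`faasmGetInputSize`, `faasmGetInput`, `faasmSetOutput`) are assumptions based on that library and may differ from the current API, so check the C/C++ functions docs. It is built with the faasm/cpp toolchain, not a native compiler.

```cpp
// Sketch of a Faasm C++ function: echo the invocation's input back as
// its output via the host interface. Host-interface call names are
// assumptions based on the faasm/cpp library.
#include <faasm/core.h>

#include <cstdint>
#include <string>
#include <vector>

int main()
{
    // Read the input data passed to this invocation
    long inputSize = faasmGetInputSize();
    std::vector<uint8_t> input(inputSize);
    faasmGetInput(input.data(), inputSize);

    // Build a response and hand it back to the host as the call's output
    std::string output = "echo: " + std::string(input.begin(), input.end());
    faasmSetOutput(reinterpret_cast<const uint8_t*>(output.data()),
                   static_cast<long>(output.size()));

    return 0;
}
```

The same `main` entry point is used for chained calls, threading and MPI; see the relevant docs pages below.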
Our paper on Faasm from USENIX ATC '20 can be found here.
You can start a Faasm cluster locally using the `docker-compose.yml` file in the root of the project:

```bash
docker-compose up -d
```
To compile, upload and invoke a C++ function using this local cluster you can use the `faasm/cpp` container:

```bash
docker-compose run cpp /bin/bash

# Compile the demo "hello" function
inv func demo hello

# Upload the function
inv func.upload demo hello

# Invoke the function
inv func.invoke demo hello
```
More detail on some key features and implementations can be found below:
- Usage and set-up - using the CLI and other features.
- C/C++ functions - writing and deploying Faasm functions in C/C++.
- Python functions - isolating and executing functions in Python.
- Distributed state - sharing state between functions.
- Faasm host interface - the serverless-specific interface between functions and the underlying host.
- Kubernetes and Knative integration - deploying Faasm as part of a full serverless platform.
- Bare metal/VM deployment - deploying Faasm on bare metal or VMs as a stand-alone system.
- API - invoking and managing functions and state through Faasm's HTTP API.
- MPI and OpenMP - executing existing MPI and OpenMP applications in Faasm.
- Developing Faasm - developing and modifying Faasm.
- Releases - instructions for releasing new versions and building container tags.
- Faasm.js - executing Faasm functions in the browser and on the server.
- Threading - executing multi-threaded applications.
- Proto-Faaslets - snapshot-and-restore to reduce cold starts.
- WAMR support - support for the wasm-micro-runtime (WIP).
- SGX - information on executing functions with SGX (WIP).
- Tensorflow Lite - performing inference in Faasm with Tensorflow Lite.
- Polybench - benchmarking with Polybench/C.
- ParRes Kernels - benchmarking with the ParRes Kernels.
- Python performance - executing the Python performance benchmarks.