Apache OpenWhisk is an open source, distributed Serverless platform that executes functions (fx) in response to events at any scale. The OpenWhisk platform supports a programming model in which developers write functional logic (called Actions), in any supported programming language, that can be dynamically scheduled and run in response to associated events (via Triggers) from external sources (Feeds) or from HTTP requests.
This repository supports deploying OpenWhisk to Kubernetes. It contains a Helm chart that can be used to deploy the core OpenWhisk platform and optionally some of its Event Providers to both single-node and multi-node Kubernetes clusters.
The same Helm chart can also be used to deploy OpenWhisk to OKD/OpenShift via a strategy of using `helm template` to generate yaml that is then fed to the `oc` CLI. There are still some rough edges in this process; we would welcome community contributions to help improve the targeting of OKD/OpenShift and to document the necessary steps.
- Prerequisites: Kubernetes and Helm
- Deploying OpenWhisk
- Administering OpenWhisk
- Development and Testing OpenWhisk on Kubernetes
- Cleanup
- Issues
Kubernetes is a container orchestration platform that automates the deployment, scaling, and management of containerized applications. Helm is a package manager for Kubernetes that simplifies the management of Kubernetes applications. You do not need to have detailed knowledge of either Kubernetes or Helm to use this project, but you may find it useful to review their basic documentation to become familiar with their key concepts and terminology.
Your first step is to create a Kubernetes cluster that is capable of supporting an OpenWhisk deployment. Although there are some technical requirements that the Kubernetes cluster must satisfy, any of the options described below is acceptable.
The simplest way to get a small Kubernetes cluster suitable for development and testing is to use one of the Docker-in-Docker approaches for running Kubernetes directly on top of Docker on your development machine. Configuring Docker with 4GB of memory and 2 virtual CPUs is sufficient for the default settings of OpenWhisk. Depending on your host operating system, we recommend the following:
- MacOS: Use the built-in Kubernetes support in Docker for Mac version 18.06 or later. Please follow our setup instructions to initially create your cluster.
- Linux: Use kind. Please follow our setup instructions to initially create your cluster.
- Windows: Use the built-in Kubernetes support in Docker for Windows version 18.06 or later. Please follow our setup instructions to initially create your cluster.
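For the kind option listed above for Linux, cluster creation is a couple of commands. The sketch below is minimal and the cluster name `openwhisk` is our own choice; the repository's kind setup instructions describe the full recommended configuration, including the port mappings the OpenWhisk ingress needs.

```shell
# Minimal sketch; see the repo's kind setup docs for the recommended config.
kind create cluster --name openwhisk
kubectl cluster-info --context kind-openwhisk
```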
You can also provision a Kubernetes cluster from a cloud provider, subject to the cluster meeting the technical requirements. You will need at least 1 worker node with 4GB of memory and 2 virtual CPUs to deploy the default configuration of OpenWhisk. You can deploy to significantly larger clusters by scaling up the replica count of the various components and labeling multiple nodes as invoker nodes. We have detailed documentation on using Kubernetes clusters from the following major cloud providers:
- IBM Kubernetes Service (IKS)
- IBM Cloud Private (ICP)
- Google (GKE)
- Amazon (EKS)

We would welcome contributions of documentation for Azure (AKS) and any other public cloud providers.
If you are comfortable with building your own Kubernetes clusters and deploying services with ingresses to them, you should also be able to deploy OpenWhisk to a do-it-yourself cluster. Make sure your cluster meets the technical requirements, including the same minimum resources described above: at least 1 worker node with 4GB of memory and 2 virtual CPUs for the default configuration, with larger clusters possible by scaling up replica counts and labeling multiple invoker nodes.
More detailed instructions:
Helm is a tool to simplify the deployment and management of applications on Kubernetes clusters. The OpenWhisk Helm chart requires Helm 3. Our automated testing currently uses Helm v3.2.4. Follow the Helm install instructions for your platform to install Helm v3.0.1 or newer (note that the single-command deployment shown below requires v3.2.0 or higher).
Now that you have your Kubernetes cluster and have installed the Helm 3 CLI, you are ready to deploy OpenWhisk.
You will use Helm to deploy OpenWhisk to your Kubernetes cluster. The four deployment steps are described in more detail in the rest of this section.
- Initial cluster setup. You will label your Kubernetes worker nodes to indicate their intended usage by OpenWhisk.
- Customize the deployment. You will create a `mycluster.yaml` file that specifies key facts about your Kubernetes cluster and the OpenWhisk configuration you wish to deploy.
- Deploy OpenWhisk with Helm. You will use Helm and `mycluster.yaml` to deploy OpenWhisk to your Kubernetes cluster.
- Configure the `wsk` CLI. You need to tell the `wsk` CLI how to connect to your OpenWhisk deployment.
Indicate the Kubernetes worker nodes that should be used to execute user containers by OpenWhisk's invokers. Do this by labeling each node with `openwhisk-role=invoker`. In the default configuration, which uses the KubernetesContainerFactory, the node labels are used in conjunction with Pod affinities to inform the Kubernetes scheduler how to place work so that user actions will not interfere with the OpenWhisk control plane. When using the non-default DockerContainerFactory, OpenWhisk assumes it has exclusive use of these invoker nodes and will schedule work on them directly, completely bypassing the Kubernetes scheduler. For a single-node cluster, simply do:

```shell
kubectl label nodes --all openwhisk-role=invoker
```

If you have a multi-node cluster, then for each node `<INVOKER_NODE_NAME>` you want to be an invoker, execute:

```shell
kubectl label nodes <INVOKER_NODE_NAME> openwhisk-role=invoker
```
If you are targeting OKD/OpenShift, use the command:

```shell
oc label node <INVOKER_NODE_NAME> openwhisk-role=invoker
```
For more precise control of the placement of the rest of OpenWhisk's pods on a multi-node cluster, you can optionally label additional non-invoker worker nodes. Use the label `openwhisk-role=core` to indicate nodes which should run the OpenWhisk control plane (the controller, kafka, zookeeper, and couchdb pods). If you have dedicated Ingress nodes, label them with `openwhisk-role=edge`. Finally, if you want to run the OpenWhisk Event Providers on specific nodes, label those nodes with `openwhisk-role=provider`.
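For example, on a multi-node cluster you might assign each role explicitly; a minimal sketch with hypothetical node names:

```shell
# node1/node2/node3 are hypothetical names; substitute your own
# (see `kubectl get nodes` for the list).
kubectl label nodes node1 openwhisk-role=core
kubectl label nodes node2 openwhisk-role=edge
kubectl label nodes node3 openwhisk-role=invoker
```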
If your Kubernetes cluster does not allow you to assign a label to a node, or you cannot use the affinity attribute, you can disable affinity entirely. Please note that this is suitable for testing purposes only, since user actions may then interfere with the OpenWhisk control plane.
You can disable affinity by editing the `mycluster.yaml` file:

```yaml
affinity:
  enabled: false
invoker:
  options: "-Dwhisk.kubernetes.user-pod-node-affinity.enabled=false"
```
You must create a `mycluster.yaml` file to record key aspects of your Kubernetes cluster that are needed to configure the deployment of OpenWhisk to your cluster. For details, see the documentation appropriate to your Kubernetes cluster:
- Docker for Mac
- Docker for Windows
- kind
- IBM Kubernetes Service (IKS)
- IBM Cloud Private (ICP)
- Google (GKE)
- Amazon (EKS)
- OKD/OpenShift
Beyond the Kubernetes cluster specific configuration information, the `mycluster.yaml` file is also used to customize your OpenWhisk deployment by enabling optional features and controlling the replication factor of the various microservices that make up the OpenWhisk implementation. See the configuration choices documentation for a discussion of the primary options.
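As a concrete illustration, a minimal `mycluster.yaml` for a Docker Desktop style cluster might look like the sketch below; the `type`, host name, and port values shown are assumptions that depend on your cluster, so take the authoritative values from the platform-specific documentation linked above.

```yaml
# Minimal sketch only; the values below are illustrative assumptions.
whisk:
  ingress:
    type: NodePort         # ingress style; depends on your cluster
    apiHostName: localhost # host clients will use to reach OpenWhisk
    apiHostPort: 31001     # NodePort exposed by the chart
```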
For simplicity, in this README, we have used `owdev` as the release name and `openwhisk` as the namespace into which the Chart's resources will be deployed. You can use a different name and/or namespace simply by changing the commands used below.
NOTE: Clone the repository https://github.com/apache/openwhisk-deploy-kube.git and use the Helm chart available under the `helm/openwhisk` folder.
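For example:

```shell
git clone https://github.com/apache/openwhisk-deploy-kube.git
cd openwhisk-deploy-kube
```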
Deployment can be done by using the following single command:

```shell
helm install owdev ./helm/openwhisk -n openwhisk --create-namespace -f mycluster.yaml
```
NOTE: The above command will only work with Helm v3.2.0 or higher. Verify your local Helm version with the command `helm version`.
Deploying to OKD/OpenShift uses the command sequence:

```shell
helm template owdev ./helm/openwhisk -n openwhisk -f mycluster.yaml > owdev.yaml
oc create -f owdev.yaml
```
The two-step sequence is currently required because the `oc` command must be used to create the `Route` resource specified in the generated `owdev.yaml`. We recommend generating to a file to make it easier to undeploy OpenWhisk later by simply doing `oc delete -f owdev.yaml`.
You can use the command `helm status owdev -n openwhisk` to get a summary of the various Kubernetes artifacts that make up your OpenWhisk deployment. Once the pod whose name contains `install-packages` is in the `Completed` state, your OpenWhisk deployment is ready to be used.

NOTE: You can watch the status of the pods by running the command `kubectl get pods -n openwhisk --watch`.
Configure the OpenWhisk CLI, `wsk`, by setting the `auth` and `apihost` properties (if you don't already have the `wsk` CLI, follow the instructions here to get it). Replace `whisk.ingress.apiHostName` and `whisk.ingress.apiHostPort` with the actual values from your `mycluster.yaml`:

```shell
wsk property set --apihost <whisk.ingress.apiHostName>:<whisk.ingress.apiHostPort>
wsk property set --auth 23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP
```
The `docker0` network interface does not exist in the Docker for Mac/Windows host environment. Instead, exposed NodePorts are forwarded from `localhost` to the appropriate containers. This means that you will use `localhost` instead of `whisk.ingress.apiHostName` when configuring the `wsk` CLI, replacing `whisk.ingress.apiHostPort` with the actual value from your `mycluster.yaml`:

```shell
wsk property set --apihost localhost:<whisk.ingress.apiHostPort>
wsk property set --auth 23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP
```
Your OpenWhisk installation should now be usable. You can test it by following these instructions to define and invoke a sample OpenWhisk action in your favorite programming language.
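For instance, a quick smoke test with a JavaScript action might look like the following sketch (the action name `hello` and the file name are our own choices, not part of the deployment):

```shell
# Create a trivial JavaScript action and invoke it.
cat > hello.js <<'EOF'
function main(params) {
  return { payload: 'Hello, ' + (params.name || 'world') + '!' };
}
EOF
wsk -i action create hello hello.js
wsk -i action invoke hello --result --param name OpenWhisk
```

The `-i` flag suppresses certificate checking, which is needed with the chart's default self-signed certificates (see the note below).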
You can also issue the command `helm test owdev -n openwhisk` to run the basic verification test suite included in the OpenWhisk Helm chart.

Note: if you installed self-signed certificates, which is the default for the OpenWhisk Helm chart, you will need to use `wsk -i` to suppress certificate checking. This works around `cannot validate certificate` errors from the `wsk` CLI.
If your deployment is not working, check our troubleshooting guide for ideas.
Using the defaults, your deployment is configured to provide a bare-minimum working platform for testing and exploration. For your specialized workloads, you can scale up your OpenWhisk deployment by defining your deployment configuration in your `mycluster.yaml`, which overrides the defaults in `helm/openwhisk/values.yaml`. Some important parameters to consider (for other parameters, check `helm/openwhisk/values.yaml` and the configuration choices documentation):
- `actionsInvokesPerminute`: limits the maximum number of invocations per minute.
- `actionsInvokesConcurrent`: limits the maximum number of concurrent invocations.
- `containerPool`: total memory available per `invoker` instance. The invoker uses this memory to create containers for user actions. The concurrency limit (actions running in parallel) will depend upon the total memory configured for `containerPool` and the memory allocated per action (default: 256mb per container).
For more information about increasing the concurrency limit, see the documentation on scaling up your deployment; a sketch of such overrides follows.
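The specific numbers below are illustrative assumptions, not recommendations, and the exact key paths should be checked against `helm/openwhisk/values.yaml` for your chart version:

```yaml
# Illustrative scale-up overrides in mycluster.yaml; tune for your workload.
whisk:
  limits:
    actionsInvokesPerminute: 600   # max invocations per minute
    actionsInvokesConcurrent: 90   # max concurrent invocations
invoker:
  containerPool:
    userMemory: "4096m"            # memory per invoker for user containers
```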
Wskadmin is the tool to perform various administrative operations against an OpenWhisk deployment.
Since `wskadmin` requires credentials for direct access to the database (which is not normally accessible from outside the cluster), it is deployed in a pod inside Kubernetes that is configured with the proper parameters. You can run `wskadmin` with `kubectl`, using the `<namespace>` and the release `<name>` that you chose when deploying.
You can then invoke `wskadmin` with:

```shell
kubectl -n <namespace> -ti exec <name>-wskadmin -- wskadmin <parameters>
```
For example, if your deployment name is `owdev` and the namespace is `openwhisk`, you can list users in the `guest` namespace with:

```shell
$ kubectl -n openwhisk -ti exec owdev-wskadmin -- wskadmin user list guest
23bc46b1-71f6-4ed5-8c54-816aa4f8c502:123zO3xZCLrMN6v2BKK1dXYFpXlPkccOFqm12CdAsMgRU4VrNZ9lyGVCGuMDGIwP
```
Check here for details about the available commands.
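Other administrative tasks follow the same pattern. For instance, a sketch of creating a new subject (the subject name `alice` is just an example):

```shell
# Creates a new subject and prints its generated auth key.
kubectl -n openwhisk -ti exec owdev-wskadmin -- wskadmin user create alice
```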
This section outlines how common OpenWhisk development tasks are supported when OpenWhisk is deployed on Kubernetes using Helm.
Some key differences in a Kubernetes-based deployment of OpenWhisk are that deploying the system does not generate a `whisk.properties` file and that the various internal microservices (`invoker`, `controller`, etc.) are not directly accessible from outside the Kubernetes cluster. Therefore, although you can run full system tests against a Kubernetes-based deployment by giving some extra command line arguments, any unit tests that assume direct access to one of the internal microservices will fail. First clone the core OpenWhisk repository locally and set `$OPENWHISK_HOME` to its top-level directory. Then the system tests can be executed batch-style as shown below, where `WHISK_SERVER` and `WHISK_AUTH` are replaced by the values returned by `wsk property get --apihost` and `wsk property get --auth`, respectively.

```shell
cd $OPENWHISK_HOME
./gradlew :tests:testSystemKCF -Dwhisk.auth=$WHISK_AUTH -Dwhisk.server=https://$WHISK_SERVER -Dopenwhisk.home=`pwd`
```
You can also launch the system tests as JUnit tests from an IDE by adding the same system properties to the JVM command line used to launch the tests:

```shell
-Dwhisk.auth=$WHISK_AUTH -Dwhisk.server=https://$WHISK_SERVER -Dopenwhisk.home=`pwd`
```
NOTE: You need to install JDK 8 in order to run these tests.
If you are using Kubernetes in Docker, it is straightforward to deploy local images by adding a stanza to your `mycluster.yaml`. For example, to use a locally built controller image, just add the stanza below to your `mycluster.yaml` to override the default behavior of pulling a stable `openwhisk/controller` image from Docker Hub.

```yaml
controller:
  imageName: "whisk/controller"
  imageTag: "latest"
```
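Note that Docker Desktop's built-in Kubernetes shares the host Docker daemon, so locally built images are visible immediately. If you are instead using kind, the image must first be loaded into the cluster nodes; a sketch, assuming the default cluster name:

```shell
# Load the locally built image into the kind nodes (cluster name "kind" assumed).
kind load docker-image whisk/controller:latest
```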
You can use the `helm upgrade` command to selectively redeploy one or more OpenWhisk components. Continuing the example above, if you make additional changes to the controller source code and want to redeploy just it without redeploying the entire OpenWhisk system, you can do the following:

```shell
# Execute these commands in your openwhisk directory
./gradlew distDocker
docker tag whisk/controller whisk/controller:v2
```
Then, edit your `mycluster.yaml` to contain:

```yaml
controller:
  imageName: "whisk/controller"
  imageTag: "v2"
```
Redeploy with Helm by executing this command in your openwhisk-deploy-kube directory:

```shell
helm upgrade owdev ./helm/openwhisk -n openwhisk -f mycluster.yaml
```
To have a lean setup (no Kafka, no Zookeeper, and no Invokers as separate entities), add the following to your `mycluster.yaml`:

```yaml
controller:
  lean: true
```
Use the following command to remove all the deployed OpenWhisk components:

```shell
helm uninstall owdev -n openwhisk
```
By default, `helm uninstall` removes the history of previous deployments. If you want to keep the history, add the command line flag `--keep-history`.
For OpenShift deployments, you cannot use `helm uninstall` to remove the OpenWhisk deployment because we did not do a `helm install`. If you saved the output from `helm template` into a file, you can simply use that file as an argument to `oc delete`. If you did not save the file, you can redo the `helm template` command and feed the generated yaml into an `oc delete` command.
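Concretely, either of the following should work (the second regenerates the yaml and pipes it straight into `oc delete`, which reads from stdin via `-f -`):

```shell
# If you kept the generated file:
oc delete -f owdev.yaml

# If not, regenerate and delete in one step:
helm template owdev ./helm/openwhisk -n openwhisk -f mycluster.yaml | oc delete -f -
```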
If your OpenWhisk deployment is not working, check our troubleshooting guide for ideas.
Report bugs, ask questions and request features here on GitHub.
You can also join our slack channel and chat with developers. To get access to our slack channel, request an invite here.