This is not an officially supported Google product.
Kubernetes Operator for Apache Flink is a control plane for running Apache Flink on Kubernetes.
Ask questions, report bugs, or propose features here, or join our Slack channel.
Check out who is using the Kubernetes Operator for Apache Flink.
Beta
The operator is under active development; backward compatibility of the APIs is not guaranteed for beta releases.
- Version >= 1.15 of Kubernetes
- Version >= 1.15 of kubectl (with kustomize)
- Version >= 1.7 of Apache Flink
The Kubernetes Operator for Apache Flink extends the vocabulary of the Kubernetes API (e.g., Pod, Service) with a FlinkCluster custom resource definition, and runs a controller Pod that watches the custom resources. Once a FlinkCluster custom resource is created and detected by the controller, the controller creates the underlying Kubernetes resources (e.g., the JobManager Pod) based on the spec of the custom resource. With the operator installed in a cluster, users can then talk to the cluster through the Kubernetes API and FlinkCluster custom resources to manage their Flink clusters and jobs.
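For a sense of what such a custom resource looks like, here is a minimal FlinkCluster sketch for a job cluster. The `apiVersion`, field names, and values below are illustrative assumptions; consult the sample manifests in the repository for the authoritative schema.

```yaml
# Hedged sketch of a FlinkCluster custom resource (field names assumed;
# verify against the operator's CRD and sample manifests).
apiVersion: flinkoperator.k8s.io/v1beta1
kind: FlinkCluster
metadata:
  name: wordcount-job-cluster    # hypothetical name
spec:
  image:
    name: flink:1.8.2            # a custom Flink image can go here
  jobManager:
    resources:
      limits:
        memory: "1Gi"
  taskManager:
    replicas: 2
  job:
    # Presence of this job section makes it a job cluster;
    # omitting it would create a session cluster instead.
    jarFile: ./examples/streaming/WordCount.jar
```

Applying this manifest with `kubectl apply -f` creates the resource; the controller then brings up the JobManager and TaskManager Pods to match the spec.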
- Support for both Flink job clusters and session clusters, depending on whether a job spec is provided
- Custom Flink images
- Flink and Hadoop configs and container environment variables
- Init containers and sidecar containers
- Remote job jar
- Configurable namespace to run the operator in
- Configurable namespace to watch custom resources in
- Configurable access scope for JobManager service
- Taking savepoints periodically
- Taking savepoints on demand
- Restarting failed job from the latest savepoint automatically
- Cancelling job with savepoint
- Cleanup policy on job success and failure
- GCP integration (service account, GCS connector, networking)
- Support for Beam Python jobs
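Several of the savepoint and cleanup features above are driven by fields in the job spec. The fragment below is a sketch only: the field names (`savepointsDir`, `autoSavepointSeconds`, `restartPolicy`, `cleanupPolicy`) and their values are assumptions about the operator's API and should be checked against the CRD before use.

```yaml
# Hedged sketch of savepoint/cleanup settings (field names assumed).
spec:
  job:
    savepointsDir: gs://my-bucket/savepoints   # hypothetical GCS path
    autoSavepointSeconds: 300                  # take savepoints periodically
    restartPolicy: FromSavepointOnFailure      # restart failed job from latest savepoint
    cleanupPolicy:
      afterJobSucceeds: DeleteCluster
      afterJobFails: KeepCluster
```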
The operator is still under active development; there is no Helm chart available yet. You can follow either the
- User Guide to deploy a released operator image on gcr.io/flink-operator to your Kubernetes cluster, or
- Developer Guide to build an operator image first and then deploy it to the cluster.
- Manage savepoints
- Use remote job jars
- Run Apache Beam Python jobs
- Use GCS connector
- Test with Apache Kafka
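As a taste of the "Manage savepoints" workflow, an on-demand savepoint is typically requested by updating the FlinkCluster resource rather than calling Flink directly. The patch fragment below is purely illustrative: the `savepointGeneration` counter field is an assumption about the job spec, so refer to the savepoints guide for the actual mechanism.

```yaml
# Hypothetical patch fragment: bumping an assumed savepointGeneration
# counter in the job spec to request an on-demand savepoint.
spec:
  job:
    savepointGeneration: 2
```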
Please check out CONTRIBUTING.md and the Developer Guide.