
AWS Virtual Kubelet

AWS Virtual Kubelet provides an extension to your Kubernetes cluster that can provision and maintain EC2-based pods. These EC2 pods can run arbitrary applications that might not otherwise fit into containers.

This expands the management capabilities of Kubernetes, enabling use-cases such as macOS native application lifecycle control via standard Kubernetes tooling.

Architecture

A typical EKS Kubernetes (k8s) cluster is shown in the diagram below. It consists of a k8s API layer, a number of nodes (each running a kubelet process), and pods (one or more containerized apps) managed by those kubelet processes.

Using the Virtual Kubelet library, this EC2 provider implements a virtual kubelet which looks like a typical kubelet to k8s. API requests to create workload pods, etc. are received by the virtual kubelet and passed to our custom EC2 provider.

This provider implements pod-handling endpoints using EC2 instances and an agent that runs on them. The agent is responsible for launching and terminating "containers" (applications) and reporting status. The provider ↔ agent API contract is defined using the Protocol Buffers spec and implemented via gRPC. This enables agents to be written in any supported language and run on a variety of operating systems and architectures ¹.
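As an illustration of the kind of contract gRPC and Protocol Buffers make possible, the sketch below shows a plausible shape for such a service. All names here are hypothetical; the real contract is defined by this repository's .proto files and will differ in names and detail.

```protobuf
// Hypothetical sketch only -- not the project's actual definitions.
syntax = "proto3";

package vkvma;

// Operations a provider needs from an agent: start and stop workloads
// ("containers") and report their status back to the provider.
service VKVMAgent {
  rpc LaunchApplication(LaunchRequest) returns (LaunchResponse);
  rpc TerminateApplication(TerminateRequest) returns (TerminateResponse);
  rpc GetApplicationStatus(StatusRequest) returns (StatusResponse);
}

message LaunchRequest { string pod_name = 1; bytes spec = 2; }
message LaunchResponse { bool accepted = 1; }
message TerminateRequest { string pod_name = 1; }
message TerminateResponse { bool terminated = 1; }
message StatusRequest { string pod_name = 1; }
message StatusResponse { string state = 1; }
```

Because the contract is plain proto3, `protoc` can generate client/server stubs for any supported language, which is what lets agents run across operating systems and architectures.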

Nodes are represented by Elastic Network Interfaces (ENIs), which maintain a predictable IP address used for naming and for consistently associating workload pods with virtual kubelet instances.

See Software Architecture for an overview of the code organization and general behavior. For detailed coverage of specific aspects of system/code behavior, see implemented RFCs.

Components

Virtual Kubelet (VK)
Upstream library / framework for implementing custom Kubernetes providers
Virtual Kubelet Provider (VKP)
This EC2-based provider implementation (also sometimes referred to as virtual-kubelet or simply VK)
Virtual Kubelet Virtual Machine (VKVM)
The Virtual Machine providing compute for this provider implementation (i.e. an Amazon EC2 Instance)
Virtual Kubelet Virtual Machine Agent (VKVMA)
The gRPC agent that exposes an API to manage workloads on EC2 instances (also VKVMAgent, or just Agent)

Mapping to Kubernetes components

kubelet → Virtual Kubelet library + this custom EC2 provider
node → Elastic Network Interface (managed by VKP)
pod → EC2 Instance + VKVMAgent + Custom Workload

Prerequisites

The following are required to build and deploy this project. Additional tools may be needed to utilize examples or set up a development environment.

Go (lang)

Tested with Go v1.12, 1.16, and 1.17. See the Go documentation for installation steps.

Docker

Docker is a container runtime, used here to build and publish the provider image.

See Get Started in the docker documentation for setup steps.

AWS account

The provider interacts directly with AWS APIs and launches EC2 instances, so an AWS account is needed. Click Create an AWS Account at https://aws.amazon.com/ to get started.

AWS command line interface

Some commands utilize the AWS CLI. See the AWS CLI page for installation and configuration instructions.

Kubernetes cluster

EKS is strongly recommended, though any k8s cluster with sufficient access to make AWS API calls and communicate over the network with gRPC agents could work.

Infrastructure QuickStart

To get the needed infrastructure up and running quickly, see the deploy README which details using the AWS CDK Infrastructure-as-Code framework to automatically provision the required resources.

Build

Once the required infrastructure is in place, follow the steps in this section to build the VK provider.

Makefile

This project comes with a Makefile to simplify build-related tasks.

Run make in this directory to get a list of subcommands and their descriptions.

Some commands (such as make push) require appropriately set environment variables to function correctly. Review the variables assigned with ?= at the top of the Makefile and set them in your shell/environment before running these commands.

  1. Run make build to build the project. This also generates protobuf and other derived files as needed.
  2. Next run make docker to create a docker image with the virtual-kubelet binary.
  3. Run make push to deploy the docker image to your Elastic Container Registry.
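The ?= assignments mean an environment value takes precedence over the Makefile's default. A minimal, self-contained illustration of that behavior (the variable name is just an example; it need not match this project's Makefile):

```shell
# Create a throwaway Makefile whose variable uses ?=, then show that a
# value from the environment overrides the in-file default.
printf 'AWS_REGION ?= us-west-2\nprint-region:\n\t@echo $(AWS_REGION)\n' > /tmp/demo.mk

make -f /tmp/demo.mk print-region                          # default applies
AWS_REGION=eu-central-1 make -f /tmp/demo.mk print-region  # environment wins
```

This is why exporting the variables in your shell before running make push is sufficient; no Makefile edits are needed.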

Deploy

Now we're ready to deploy the VK provider using the steps outlined in this section.

Some commands below utilize the kubectl tool to manage and configure k8s. Other tools such as Lens may be used if desired (adapt instructions accordingly).

Example files that require replacing placeholders with actual (environment-specific) data are copied to ./local before modification. The local directory's contents are git-ignored, which prevents accidentally committing account numbers and other environment details to the GitHub repo.

Cluster Role and Binding

The ClusterRole and ClusterRoleBinding give VK pods the permissions necessary to manage k8s workloads.

  1. Run kubectl apply -f deploy/vk-clusterrole_binding.yaml to deploy the cluster role and binding.
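For orientation, a ClusterRole plus ClusterRoleBinding manifest generally has the shape below. This is an illustrative sketch only, with hypothetical names and rules; the authoritative definitions are in deploy/vk-clusterrole_binding.yaml.

```yaml
# Illustrative sketch -- see deploy/vk-clusterrole_binding.yaml for the
# real definitions used by this project.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: virtual-kubelet        # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/status", "nodes", "nodes/status"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: virtual-kubelet        # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: virtual-kubelet
subjects:
  - kind: ServiceAccount
    name: default              # hypothetical; match the VK pods' service account
    namespace: default
```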

ConfigMap

The ConfigMap provides global and default VK/VKP configuration elements. Some of these settings may be overridden on a per-pod basis.

  1. Copy the provided examples/config-map.yaml to the ./local dir and modify as-needed. See Config for a detailed explanation of the various configuration options.

  2. Next, run kubectl apply -f local/config-map.yaml to deploy the config map.

StatefulSet

This configuration will deploy a set of VK providers using the docker image built and pushed earlier.

  1. Copy the provided examples/vk-statefulset.yaml file to ./local.
  2. Replace these placeholders in the image: reference with the values from your account/environment:
    1. AWS_ACCOUNT_ID
    2. AWS_REGION
    3. DOCKER_TAG
  3. Run kubectl apply -f local/vk-statefulset.yaml to deploy the VK provider pods.
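Assuming the placeholders appear literally in the copied file, the substitution can be scripted with sed. The image path below is an assumed shape for illustration, and the account ID, region, and tag are example values, not real ones:

```shell
# Example substitution of the three placeholders in an image reference.
# The line's exact shape in vk-statefulset.yaml may differ; values are samples.
line='image: AWS_ACCOUNT_ID.dkr.ecr.AWS_REGION.amazonaws.com/aws-virtual-kubelet:DOCKER_TAG'
echo "$line" | sed \
  -e 's/AWS_ACCOUNT_ID/123456789012/' \
  -e 's/AWS_REGION/us-east-1/' \
  -e 's/DOCKER_TAG/v0.1.0/'
# In practice, run the same three expressions with sed -i over
# local/vk-statefulset.yaml instead of a single echoed line.
```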

Usage

At this point you should have at least one VK provider pod running successfully. This section describes how to launch EC2-backed pods using the provider.

examples/pods contains both a single (unmanaged) pod example and a pod Deployment example.

NOTE It is strongly recommended that workload pods run via a supervisory management construct such as a Deployment (even for single-instance pods). This will help minimize unexpected loss of pod resources and allow Kubernetes to efficiently use resources.

  1. Copy the desired pod example(s) to ./local.
  2. Run kubectl apply -f <filename> (replacing <filename> with the actual file name).
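To make the Deployment recommendation concrete, a minimal manifest might look like the sketch below. This is illustrative only: the label, toleration key, and image value are assumptions, not this project's actual conventions; see examples/pods for the real manifests.

```yaml
# Illustrative sketch -- see examples/pods for the project's actual manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ec2-workload            # hypothetical name
spec:
  replicas: 1                   # a Deployment is recommended even for one pod
  selector:
    matchLabels:
      app: ec2-workload
  template:
    metadata:
      labels:
        app: ec2-workload
    spec:
      nodeSelector:
        type: virtual-kubelet   # assumed label; match your VK node labels
      tolerations:
        - key: virtual-kubelet.io/provider   # assumed taint key
          operator: Exists
      containers:
        - name: workload
          image: placeholder    # the actual workload runs on EC2 via the agent
```

Wrapping even single-instance pods this way lets Kubernetes recreate a lost pod automatically, which is the behavior the note above recommends.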

See the Cookbook for more usage examples.

Frequently Asked Questions

Why does this project exist?

This project serves as a translation and mediation layer between Kubernetes and EC2-based pods. It was created to run custom workloads directly on any EC2 instance type/size available from AWS (e.g. Mac instances).

How can I use it?

  1. Follow the steps in this README to get all the infrastructure and requirements in place and working with the example agent.
  2. Using the example agent as a guide, implement your own gRPC agent to support the desired workloads.

How can I help?

Take a look at the issues labeled good first issue. Read the CONTRIBUTING guidelines and submit a Pull Request! 🚀

Are there any known issues and/or planned features or improvements?

Yes. See RFCs for improvement proposals and EdgeCases for known issues / workarounds.

BacklogFodder contains additional items that may become roadmap elements.

Are there metrics I can use to monitor system state / behavior?

Yes. See Metrics for details.

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

Style Guide

Go

gofmt formatting is enforced via a GitHub Actions workflow.

Footnotes

  1. A Golang sample agent is included.