WARNING: Pebble as an ACME server and this Helm chart are only meant for testing purposes; they are not secure and not meant for production.
Pebble is an ACME server like Let's Encrypt. ACME servers can provide TLS certificates for HTTP over TLS (HTTPS) to ACME clients that are able to prove control over a domain name through an ACME challenge.
This Helm chart makes it easy to install Pebble in a Kubernetes cluster using Helm. While we recommend using Kubernetes' internal DNS functionality, this Helm chart can also deploy pebble-challtestsrv, which can for example act as a configurable DNS server.
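As a sketch, assuming the chart's challtestsrv.enabled value (it is used with --set challtestsrv.enabled=true in the development setup commands in this document), a values file enabling the optional pebble-challtestsrv deployment could look like this:

```yaml
# values fragment: also deploy pebble-challtestsrv alongside Pebble
challtestsrv:
  enabled: true
```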
To test interactions against an ACME server like Let's Encrypt from an unreachable CI environment, as most ephemeral CI environments are, using Let's Encrypt's staging environment likely won't work, at least if you are using the HTTP-01 ACME challenge.
In the commonly used HTTP-01 ACME challenge, an ACME client proves its control of a domain by serving a response on that domain's web server. During this challenge, the ACME server will look up the domain name's IP and make a web request to it, and that's the problem: in an ephemeral CI environment, it is likely impossible to receive new incoming connections from Let's Encrypt's servers.
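To make the exchange concrete, here is a local simulation of the HTTP-01 challenge's shape. This is only a sketch: the token and thumbprint values are hypothetical, and a local web server stands in for both sides.

```shell
# Hypothetical HTTP-01 key authorization, published by the ACME client
# at the well-known path the ACME server will request.
mkdir -p webroot/.well-known/acme-challenge
echo "hypothetical-token.hypothetical-jwk-thumbprint" \
  > webroot/.well-known/acme-challenge/hypothetical-token

# Stand-in for the ACME client's web server (port 8099 chosen arbitrarily).
python3 -m http.server 8099 --directory webroot >/dev/null 2>&1 &
server_pid=$!
sleep 1

# Stand-in for the validation request an ACME server like Pebble makes.
curl -s http://localhost:8099/.well-known/acme-challenge/hypothetical-token \
  > challenge-response.txt
kill "$server_pid"
cat challenge-response.txt
```

With Let's Encrypt, that validation request originates from Let's Encrypt's own servers, which is exactly the incoming traffic an ephemeral CI environment cannot receive; with Pebble, it originates from inside the cluster.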
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm install pebble jupyterhub/pebble
A packaged Helm chart bundles its sub-charts' templates, so adding Pebble as a dependency is not recommended for Helm charts being packaged for distribution.
Installing Pebble as part of another chart should likely be made conditional using tags or conditions.
# Chart.yaml - Helm 3 only, see note below for Helm 2 use.
apiVersion: v2
name: my-chart
# ...
dependencies:
  - name: pebble
    version: 0.1.0
    repository: https://jupyterhub.github.io/helm-chart/
    tags:
      - ci
NOTE: Helm 3 supports Chart.yaml files with apiVersion: v2, and there you can specify chart dependencies directly. If you want to remain compatible with Helm 2, your Chart.yaml file has to have apiVersion: v1, and the chart dependencies need to be specified in a separate requirements.yaml file.
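For reference, the Helm 2 equivalent moves the same dependency entry into requirements.yaml, mirroring the Chart.yaml example above:

```yaml
# requirements.yaml - Helm 2 only
dependencies:
  - name: pebble
    version: 0.1.0
    repository: https://jupyterhub.github.io/helm-chart/
    tags:
      - ci
```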
Helm charts render templates into Kubernetes yaml files using configurable values. A Helm chart comes with default values, and these can be overridden during chart installation and upgrades, for example with the --values flag to pass a YAML file or with the --set flag.
To configure the Pebble Helm chart, create a my-values.yaml file to pass with --values. If you have installed it as a sub-chart, you should nest the configuration under the sub-chart's name.
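As a sketch of the sub-chart case: if Pebble was added as a dependency named pebble, its values in the parent chart's my-values.yaml sit one level deeper under that name.

```yaml
# my-values.yaml of a parent chart with Pebble as a sub-chart
pebble:       # the dependency's name in Chart.yaml
  pebble:     # the Pebble chart's own top-level key
    env:
      - name: PEBBLE_VA_NOSLEEP
        value: "1"
```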
Pebble is developed to test ACME clients and ensure they are robust, so it can intentionally act mischievously. The default values of this Helm chart, seen below, configure Pebble to ensure speedy certificate acquisition. Note that if you provide an array to pebble.env, it will override the default array of environment variables.
pebble:
  env:
    # ref: https://github.com/letsencrypt/pebble#testing-at-full-speed
    - name: PEBBLE_VA_NOSLEEP
      value: "1"
See Pebble's documentation for more info about its mischievous behavior.
Pebble will connect to a domain's web server on specific ports during HTTP-01 (default 80) and TLS-ALPN-01 (default 443) challenges, and you can configure those ports. This is useful if your web server is behind a Kubernetes service exposing it on port 8080, for example.
pebble:
  config:
    pebble:
      httpPort: 80 # this is the port where outgoing HTTP-01 challenges go
      tlsPort: 443 # this is the port where outgoing TLS-ALPN-01 challenges go
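For example, if the web server answering challenges sits behind a Kubernetes Service on port 8080, a hypothetical override could look like this:

```yaml
pebble:
  config:
    pebble:
      httpPort: 8080 # Pebble's HTTP-01 validation requests now target port 8080
```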
The ACME client should be configured to work against the Pebble ACME server. The ACME client also needs to explicitly trust the root TLS certificate that has signed the leaf TLS certificate used by Pebble for the ACME communication, which is made over HTTPS.
The ACME client should communicate with something like https://pebbles-service-name.pebbles-namespace/dir. The namespace part can be omitted if Pebble is in the same namespace as the ACME client, and pebbles-service-name can be found with kubectl get svc --all-namespaces | grep pebble.
WARNING: All HTTPS communication should be treated as unsafe HTTP communication! This is only meant for testing!
The ACME client and anything communicating with Pebble's actual ACME server or management REST API needs to trust this root certificate. Its associated publicly exposed key has signed the leaf certificate that Pebble uses for HTTPS communication on port 443 (Pebble's ACME server) and port 8080 (Pebble's management REST API, with endpoints like /roots/0).
The other root certificate is what Pebble uses to sign certificates for its ACME clients. Pebble recreates this root certificate on startup and exposes it and its associated key through the management REST API at https://pebble:8080/roots/0 without any authorization.
The ACME client will need access to the root certificate, and it needs to be configured to trust it.
A Kubernetes ConfigMap can contain the root certificate to trust, and then be mounted as a file in the ACME client's pod's container.
If the Pebble Helm chart is installed in the ACME client's namespace, we can reuse a ConfigMap from it that contains the root certificate to trust. The ConfigMap's name can be found with kubectl get cm --all-namespaces | grep pebble.
Otherwise, you can create a ConfigMap with the root certificate like this.
cat <<EOF | kubectl apply --namespace <namespace-of-acme-client> -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: pebble
data:
  root-cert.pem: |
    -----BEGIN CERTIFICATE-----
    MIIDSzCCAjOgAwIBAgIIOvR7X+wFgKkwDQYJKoZIhvcNAQELBQAwIDEeMBwGA1UE
    AxMVbWluaWNhIHJvb3QgY2EgM2FmNDdiMCAXDTIwMDQyNjIzMDYxNloYDzIxMjAw
    NDI3MDAwNjE2WjAgMR4wHAYDVQQDExVtaW5pY2Egcm9vdCBjYSAzYWY0N2IwggEi
    MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCaoISLUOImo7vm7sGUpeycouDP
    TcJj6CxfCbvBsrlAg8ERGIph9H7TuDnTVk46pOaoxByGlwvvh4qR/Dled+G8NCt5
    s0r0yemY/fx1grm1TmcJRO+A1P5kx/M9hy+kVcyLRvPOnvo8Thj/4zvaJDh+pSjt
    5oAQvOHt9hYwGkkvSsZw12cTUuCsbypQ4lapDSeAjp3pNlqFcWmCvF9Ib3URDybN
    JWhY6yQQe54D2LxYqxCfYZjKhNbaxlNTlHu0Ujy75I/AdSjK6DljAZh0OimuQNEm
    FyXWvpnfyHbV5f0mMiXIOo2FY8izSD7cyFagmr0XvymCtxeDK1+MvT2pM+rXAgMB
    AAGjgYYwgYMwDgYDVR0PAQH/BAQDAgKEMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggr
    BgEFBQcDAjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBR0MgecNe4RY575
    qAtIt6zAjbBqLTAfBgNVHSMEGDAWgBR0MgecNe4RY575qAtIt6zAjbBqLTANBgkq
    hkiG9w0BAQsFAAOCAQEAjtGjoXGRG7586vyT3XcJBa8y9MOsDhQGOec23h40NJCn
    SPF28bmTIaWhB+Hv8G+Mkyf9Ov3L5L/mH0VGvZUkMAnSdT4vaMYGrTvMtYGS/8ew
    lPnlSJ3oO9Kz9zfOneoPDD1OGkV0Oq3wLn9cq6jQgItEeACsXNtaogXJxYhvxiV1
    1k/gjXmG9pvFpb0A1bw6apxGftIViDKrPR2P/pG3QAuLKywQiNxZ5odf3kvKdZmJ
    hLbu119My9XiiWhNegufcRNRNEnKJ5AQsBEwLEnD4oeIZmFvYVKOPjfWRV5qczVi
    mUPjtQv88HhlgX/lBVWJ2VONlFWVoOreZz4GkAm5bA==
    -----END CERTIFICATE-----
EOF
# ... within a Pod specification
volumes:
  - name: pebble-root-cert
    configMap:
      name: pebble
      ## ... if the Pebble chart was installed as a sub-chart.
      # name: {{ .Release.Name }}-pebble
containers:
  - name: my-container-with-an-acme-client
    # ...
    volumeMounts:
      - name: pebble-root-cert
        subPath: root-cert.pem
        mountPath: /etc/pebble/root-cert.pem
Configuring the ACME client to trust a provided root certificate will depend on the ACME client. As an example, a popular ACME client in Kubernetes contexts is LEGO. LEGO can be configured to trust a root certificate and its signed leaf certificates if a file path is provided through the LEGO_CA_CERTIFICATES environment variable.
# ... within a Pod specification template of a Helm chart
containers:
  - name: my-container-with-a-lego-acme-client
    # ...
    env:
      - name: LEGO_CA_CERTIFICATES
        value: /etc/pebble/root-cert.pem
Pebble as an ACME server needs to resolve domain names to where the ACME client can receive traffic. This can be done in various ways, and it is not apparent what makes the most sense for your setup. We recommend the basic or intermediary options below.
If you don't need your ACME client to have a specific domain name (mydomain.test), you could test against the domain name of the ACME client's Kubernetes Service (mysvc.mynamespace). For example, if an ACME client is running in a pod targeted by the Kubernetes service called client-svc in the namespace client-namespace, then you could use client-svc and/or client-svc.client-namespace as domain names.
An upside of this approach is that any pod in the Kubernetes cluster will be able to find its way to the actual web server using the domain name, not only those, like Pebble, that use the configurable DNS server.
A downside of this approach is that requests must go to this specific domain name as it is the only reference to the Kubernetes service. This can be resolved in two ways.
You can configure the cluster-wide DNS server to point a set of domain names to this Kubernetes Service's domain name with CNAME records. While kube-dns was common before, CoreDNS is the common DNS server today.
CoreDNS is configured through a Corefile, which could very well be mounted through a ConfigMap called coredns with a key called Corefile in the kube-system namespace. If that is the case, we can simply modify this ConfigMap, wait a while, and then see the change take effect.
Here is the Corefile part of the coredns ConfigMap in kube-system that I got as part of starting a k3s Kubernetes cluster.
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      upstream
      fallthrough in-addr.arpa ip6.arpa
    }
    hosts /etc/coredns/NodeHosts {
      reload 1s
      fallthrough
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
By modifying this Corefile to have the following section below the line with "ready", we will make all lookups of any.thing.test resolve to the IP of client-svc.client-namespace.svc.cluster.local.
template ANY ANY test {
    match "^([^\.]+\.)*test\.$"
    answer "{{ .Name }} 60 IN CNAME client-svc.client-namespace.svc.cluster.local"
    upstream
}
It is possible to make this modification with kubectl edit configmap -n kube-system coredns, but also with a command like the following, which is suitable in a script for CI environments.
kubectl get configmap -n kube-system coredns -o jsonpath='{.data.Corefile}' \
    | sed '/ready$/a \
template ANY ANY test {\
    match "^([^\.]+\.)*test\.$"\
    answer "{{ .Name }} 60 IN CNAME client-svc.client-namespace.svc.cluster.local"\
    upstream\
}' \
    > /tmp/Corefile
kubectl create configmap -n kube-system coredns --from-file Corefile=/tmp/Corefile -o yaml --dry-run=client \
    | kubectl apply -f -
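Since a broken Corefile can take down cluster DNS, it can be worth dry-running the sed edit locally first. A sketch against a minimal, hypothetical Corefile:

```shell
# A minimal stand-in for the real Corefile fetched from the cluster.
cat > /tmp/Corefile.in <<'EOF'
.:53 {
    errors
    ready
    forward . /etc/resolv.conf
}
EOF

# The same sed edit as above, applied to the local file.
sed '/ready$/a \
template ANY ANY test {\
    match "^([^\.]+\.)*test\.$"\
    answer "{{ .Name }} 60 IN CNAME client-svc.client-namespace.svc.cluster.local"\
    upstream\
}' /tmp/Corefile.in > /tmp/Corefile.out

# The template block should now sit directly below the "ready" line.
cat /tmp/Corefile.out
```

Inspecting /tmp/Corefile.out before applying the real ConfigMap change catches a sed expression that silently matched nothing.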
# clone the git repo
git clone https://github.com/jupyterhub/pebble-helm-chart.git
cd pebble-helm-chart
# setup a local k8s cluster
k3d create --wait 60 --publish 8443:32443 --publish 8080:32080 --publish 8053:32053/udp --publish 8053:32053/tcp --publish 8081:32081
export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
# install pebble
helm upgrade pebble ./pebble --install --set challtestsrv.enabled=true
# run a basic health check
helm test pebble
kubectl logs pebble-test -c acme-mgmt
kubectl logs pebble-test -c dns-mgmt
kubectl logs pebble-test -c dns
No changelog or similar yet.
Making a release is as easy as pushing a tagged commit on the master branch.
git tag -a x.y.z -m x.y.z
git push --follow-tags