Kind Load Balancer
aojea opened this issue · 30 comments
/kind design
See previous discussions including #691 (comment)
On Docker for Linux you can deploy something like MetalLB and have fun today. To make something portable that we could ship by default with kind, you would need to solve the networking problems on Docker for Windows, Docker for Mac, etc., and design it such that we can support e.g. Ignite or Kata later.
This is in the backlog until someone proposes and proves a workable design.
/priority backlog
see also though in the meantime: https://mauilion.dev/posts/kind-metallb/
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Another interesting project https://github.com/alexellis/inlets-operator#video-demo
It has some examples with kind. https://github.com/alexellis/inlets-operator#run-the-go-binary-with-packetcom
/remove-lifecycle stale
@BenTheElder Hey Ben - do you think an ETA can be set for this feature? I wonder whether I can try to help here.
There is no ETA because it needs a workable design to be agreed upon. So far we don't have one.
This is another workaround: https://gist.github.com/alexellis/c29dd9f1e1326618f723970185195963
hehe, I think this is the simplest and most easily scripted one:
# expose the service
kubectl expose deployment hello-world --type=LoadBalancer
# assign an IP to the load balancer
kubectl patch service hello-world -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.31.71.218"]}}'
# it works now
kubectl get services
NAME              TYPE           CLUSTER-IP         EXTERNAL-IP     PORT(S)          AGE
example-service   NodePort       fd00:10:96::3237   <none>          8080:32677/TCP   13m
hello-world       LoadBalancer   fd00:10:96::98a5   172.31.71.218   8080:32284/TCP   5m47s
kubernetes        ClusterIP      fd00:10:96::1      <none>          443/TCP          22m
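For a quick check (and assuming 172.31.71.218 actually routes to one of the cluster nodes from wherever you test), the service should answer on its port:
# the external IP must be routable to a node for this to work
curl http://172.31.71.218:8080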
wow, even simpler:
kubectl expose deployment hello-world --name=testipv4 --type=LoadBalancer --external-ip=6.6.6.6
$ kubectl get service
NAME              TYPE           CLUSTER-IP         EXTERNAL-IP     PORT(S)          AGE
example-service   NodePort       fd00:10:96::3237   <none>          8080:32677/TCP   27m
hello-world       LoadBalancer   fd00:10:96::98a5   172.31.71.218   8080:32284/TCP   20m
kubernetes        ClusterIP      fd00:10:96::1      <none>          443/TCP          37m
testipv4          LoadBalancer   fd00:10:96::4236   6.6.6.6         8080:30164/TCP   6s
and using this script to set the ingress IP (see comment #702 (comment))
https://gist.github.com/aojea/94e20cda0f4e4de16fe8e35afc678732
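I haven't dug into the gist's exact approach, but a minimal sketch of writing an ingress IP into the service status directly would look something like this, assuming a kubectl recent enough to support --subresource:
# write a load balancer ingress IP into the service status (names/IP from the example above)
kubectl patch service hello-world --subresource=status --type=merge \
  -p '{"status":{"loadBalancer":{"ingress":[{"ip":"172.31.71.218"}]}}}'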
@aojea That's not a load balancer; an external IP can be set regardless of service type. If a load balancer controller is active, the ingress entries should appear in the service's status field.
For me, I'd love a solution similar to minikube tunnel. I test multiple services exposed via an Istio ingress gateway and use DNS for resolution with fixed ports. The DNS config is automated: after running minikube tunnel, my script grabs the external IP and updates the DNS records.
@aojea and I briefly discussed some prototypes for this, but not ready to move on anything yet.
we link to the metallb guide here https://kind.sigs.k8s.io/docs/user/resources/#how-to-use-kind-with-metalllb
FWIW MetalLB also runs some CI with kind, last I checked, but it's still Linux-only.
> see also though in the meantime: https://mauilion.dev/posts/kind-metallb/
The provided info is a bit outdated now.
This is how I managed to get it working on the latest version:
$ cat << EOF | kind create cluster --image kindest/node:v1.18.2@sha256:7b27a6d0f2517ff88ba444025beae41491b016bc6af573ba467b70c5e8e0d85f --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# 1 control plane node and 3 workers
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
On first install only:
$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
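Before applying the address pool config below, it may help to wait until the MetalLB pods are ready; app=metallb is, as far as I know, the label the upstream manifests use:
$ kubectl wait --namespace metallb-system --for=condition=ready pod \
    --selector=app=metallb --timeout=90s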
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.1-172.20.255.250
EOF
NOTE: the 172.20.255.x addresses are unused IPs within the network range created by kind for the cluster (check it with docker network inspect kind).
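To see which range is free, you can print the subnet of the kind docker network first (output will vary per host):
$ docker network inspect kind -f '{{(index .IPAM.Config 0).Subnet}}'
172.20.0.0/16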
To check the installation and configuration:
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: echo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: inanimate/echo-server
        ports:
        - containerPort: 8080
EOF
$ kubectl expose replicaset echo --type=LoadBalancer
$ kubectl get svc echo
NAME   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
echo   LoadBalancer   10.109.194.17   172.20.255.1   8080:30256/TCP   151m
$ curl http://172.20.255.1:8080
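Hitting the service repeatedly should show responses coming from the different replicas, though the exact response format depends on the echo image:
$ for i in 1 2 3; do curl -s http://172.20.255.1:8080 | head -n 1; done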
I've been using this for 6-7 months now and it's been working pretty well for me.
-- https://github.com/Xtigyro/kindadm
If you are trying to get this working on Docker for Windows (it will probably work for Mac too), it's very similar to @rubensa's comment #702 (comment), except for the addresses you need:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 127.0.0.240/28
EOF
and then you can expose the service via
kubectl port-forward --address localhost,0.0.0.0 service/echo 8888:8080
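after which the service answers on the forwarded local port:
curl http://localhost:8888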
I may update my fork of @Xtigyro's repo with this setup once I get it working properly.
Update: did it - https://github.com/williscool/deploy-kubernetes-kind
Adding to what @rubensa posted, this will auto-detect the correct address range for your kind network:
network=$(docker network inspect kind -f "{{(index .IPAM.Config 0).Subnet}}" | cut -d '.' -f1,2)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $network.255.1-$network.255.250
EOF
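To double-check the substituted range, you can print the rendered config back out:
kubectl get configmap config -n metallb-system -o jsonpath='{.data.config}'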
For macOS at least, I've found that I can hackily get this to work by using an external docker container that runs socat on the kind network. This is relatively easy to automate with an in-cluster controller: as long as all the kind nodes that could run the controller have the docker sock mounted (which is, unfortunately, fairly insecure), the controller can deploy a new docker container outside of the kind cluster that binds to 127.0.0.1 on the macOS host and replicates the NodePort through to the host OS. While not a "real" load balancer, it suffices so that you don't have to run port-forwarding to access normally exposed services.
Behind the scenes, the controller is really just starting/stopping/updating a docker container that looks something like this:
docker run -d --restart always \
--name kind-kind-proxy-31936 \
--publish 127.0.0.1:31936:31936 \
--link kind-control-plane:target \
--network kind \
alpine/socat -dd \
tcp-listen:31936,fork,reuseaddr tcp-connect:target:31936
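With a container like that running, the port is then reachable on the host loopback (31936 here is just whatever NodePort the service happened to get):
curl http://127.0.0.1:31936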
You still need to look up the proper ports to route to, but it works for both NodePort and LoadBalancer services. The proof-of-concept controller I wrote handles the normal operations, like updating the service status with Status.LoadBalancer.Ingress[].IP and Status.LoadBalancer.Ingress[].Hostname. It's also nice because it doesn't require anything special from the kind setup except for the extra volume mounts, e.g. something like this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
- role: worker
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
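Creating a cluster from such a config is the usual invocation; the filename here is arbitrary:
kind create cluster --config kind-docker-sock.yaml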
I'm wondering if this would be of use to anyone else?
Forgot to mention: we have a dedicated metallb guide, for Linux at least: https://kind.sigs.k8s.io/docs/user/loadbalancer/
I think @aojea and @mauilion had some other ideas to discuss as well?
@christopherhein that sounds pretty useful, I'm not sure if it's the direction we should progress here for the reasons you mentioned but I'm sure some users would be interested anyhow.
Yeah, the insecurity of the docker socket mount is kind of annoying. It would be easy to reduce the risk by only scheduling on control plane nodes and limiting the docker mount to that kind node, but still. The hard part for me would be figuring out where I could share this code base :)
Does the guide still work now that docker is no longer used under-the-hood in the latest version of kind? I'm absolutely guessing at what's wrong with my setup, but I've used this before, following the docs around load balancers, and it just doesn't seem to work anymore.
(I'm running kind in docker, but I can see that kind is running containerd underneath... if the bridge network belongs to docker on the outside, I don't see how containerd can talk on it from inside. I'm not a networking expert; the errors I'm getting are "no route to host".)
The docker machine itself shows up in my arp tables and responds to pings:
? (172.18.0.2) at 02:42:ac:12:00:02 [ether] on br-d9ef30b68bc8
but the load balancers I created in the same IP range in ARP mode seem to be no-shows:
? (172.18.255.201) at <incomplete> on br-d9ef30b68bc8
$ curl https://172.18.255.201
curl: (7) Failed to connect to 172.18.255.201 port 443: No route to host
I'm happy to try it on a different earlier version although I can't tear this one down right now, I just wondered if anyone already observed an issue with this configuration recently and it just maybe hasn't been logged.
FWIW, I did find this issue logged against metallb, which suggested disabling IPv6 in order to reach the load balancer IPs, and I am now having success with that method as I write this. (I'm at least able to reach one of my load balancers now, from the host and from the other side of my tailnet; the other one is not being advertised, but as far as I can tell that's a problem downstream, not related to metallb.)
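For reference, one way to disable IPv6 on a specific bridge interface on a Linux host is a sysctl like the following; the interface name is taken from the arp output above and will differ per host, and this is only my guess at what the linked workaround amounts to:
sudo sysctl -w net.ipv6.conf.br-d9ef30b68bc8.disable_ipv6=1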
> Does the guide still work now that docker is no longer used under-the-hood in the latest version of kind?
To clarify:
KIND nodes are docker or podman containers which run containerd inside.
KIND switched to containerd inside of the node containers before this issue was even filed, back in May 2019.
The guide has received many updates and should be working.
The rest of your comment is somewhat ambiguous due to loose terminology around e.g. "in docker" (usually means inside a docker container, but given context I think you just mean docker nodes not podman nodes) or "docker machine" (I think you mean a node container but could be the VM or physical machine in which you're running docker which then has the node containers).
Please file as a new support or bug issue with all the details of your host environment, to keep discussion organized and provide enough details to help diagnose.
I don't have an issue at this point, and I don't know that this issue needs to remain open either, although I'm not sure I read the full history. I came to this issue and reported here because I was having trouble and it was open, so from my perspective it was ambiguous whether the guide should be expected to work.
I'd suggest closing this issue if nobody is actively having problems that can really be attributed to kind now. Farming the feature out to metallb and covering it with docs on the kind side seems like all that is needed.
It is documented, and the documentation is super clear. No action needed from my perspective. Otherwise, sorry for contributing to the noise floor; unless your mind is made up that kind should support load balancers in a more direct or first-class way, I think the support as it is today is just fine.
No worries, I just can't tell enough to debug your specific issue yet, and we should have that discussion separately. If you need help with that in the future, please do file one and we'll try to help figure it out.
As far as this issue goes, the severe limitations on Mac and Windows are still problematic; most cluster tools provide a working, reachable load balancer out of the box. It would be great to ship something that handles this more intelligently, we just haven't had the time yet.
Theoretically, one could write a custom controller that tunnels the traffic to the host and onto the docker network simultaneously with some careful hackery, and consider making that a standard part of kind clusters.
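A very rough sketch of what one reconcile pass of such a controller could look like, reusing the socat container trick from earlier in this thread; this handles only the first port of each LoadBalancer service, and the container names and kind network are assumptions, not a real implementation:
#!/usr/bin/env bash
# sketch: publish each LoadBalancer service's first NodePort on the host
# loopback via a socat container attached to the kind docker network
set -euo pipefail
kubectl get svc --all-namespaces \
  -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.name}{" "}{.spec.ports[0].nodePort}{"\n"}{end}' |
while read -r name nodeport; do
  # replace any stale proxy for this port, then start a fresh one
  docker rm -f "kind-proxy-${nodeport}" 2>/dev/null || true
  docker run -d --restart always \
    --name "kind-proxy-${nodeport}" \
    --publish "127.0.0.1:${nodeport}:${nodeport}" \
    --network kind \
    alpine/socat -dd \
    "tcp-listen:${nodeport},fork,reuseaddr" "tcp-connect:kind-control-plane:${nodeport}"
done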
@christopherhein thanks for the hint about the socat container. I was able to run my local setup combining your proposal with MetalLB on kind on macOS: https://github.com/ReToCode/local-kind-setup